FAANG Stocks Historical Raw and Engineered Time-Series Dataset (2013-2025)

Since this is a comprehensive README file with multiple sections and crosslinks to other documents and images, I wanted to start by providing a ToC with hyperlinks to simplify navigation for readers. (Special thanks to @csavur for this very helpful suggestion!)

DOCUMENT NAVIGATION GUIDE (ToC)

1 - Summary
2 - Usage & Reproducibility
3 - Practical Uses of this Dataset
4 - Organization of Data

5 - Engineered (Price) Feature Calculations
6 - Ground Truth Definitions and the "golden" Thresholds
7 - Ground Truth Label Assignment Logic
8 - Exploratory Data Analysis Insights
9 - Data Limitations/Bias
10 - References
11 - Citation

1 - SUMMARY

This is a time-series dataset on FAANG stock price changes between 1 January 2013 and 31 December 2025. FAANG includes the tech giants Facebook/Meta, Apple, Amazon, Netflix and Google. It was curated to bridge the gap between raw market data and ready-to-train features for financial ML models.

Data is provided in two variants:

1 - Raw: This is the daily OHLCV (csv) data that has been scraped using a Python library and includes the adjusted prices for the FAANG stock tickers. Here 'OHLCV' stands for the Open, High, Low, Close and Volume information for each day.

2 - Engineered: This (csv) data is generated using the raw data and includes a set of carefully engineered features that can be used to extract various price patterns/regimes for FAANG stocks.

Following are the key advantages and differentiation this dataset offers:

1 - This dataset has already been sanitized and does not have any missing data. In addition, there are no inconsistencies in terms of data types or representations as it has gone through rigorous testing and verification.

2 - The raw dataset is based on real-world stock prices that have been scraped using the yahoofinance library. There is no synthetic data in any part of this dataset.

3 - The dataset comes with a comprehensive exploratory data analysis to provide key insights into what the data is, what it can be used for as well as its strengths and potential areas of caution - all made available without having to type a single line of code for exploration!

4 - Besides the real-world raw data, a feature engineered ML-ready dataset (derived from the raw data) is also included. Creation of the engineered features as well as the assigned ground truth labels are also described with the mathematical and observational reasoning on the relevant stock market movements.

5 - Additional statistical concepts and formulation are also provided as part of this README file where needed without overwhelming the reader.

2 - USAGE & REPRODUCIBILITY

You can download my dataset using the following Python script.

from datasets import load_dataset

# Following will load the (ML-ready) feature engineered dataset.
engineered_dataset = load_dataset("ML-Owl/faang-engineered-time-series-features-2013-2025", "engineered")

# Following will load the raw OHLCV dataset.
raw_dataset = load_dataset("ML-Owl/faang-engineered-time-series-features-2013-2025", "raw")
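
If you prefer working with pandas, a loaded split can be converted to a DataFrame. A minimal sketch, assuming the default "train" split name (adjust if the repository defines different splits):

# Convert the engineered split to a pandas DataFrame for quick inspection.
df = engineered_dataset["train"].to_pandas()
print(df[["ticker", "end_date", "slope", "zcr", "volatility", "trend_strength", "gt"]].head())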

3 - PRACTICAL USES OF THIS DATASET

I created this dataset to develop various ML models using real FAANG data, and to use some of those models as part of a larger ML pipeline that aims to push the quarterly and annual yield of my stock portfolio above 25%. I therefore see this dataset as one of the stepping stones toward that overarching 25%+ total yield target.

While training models using this dataset is not a silver bullet for maximizing your investment yields, such models can provide useful guidance in answering questions such as:
1 - Is a stock starting to manifest a strong upward trend?
2 - Is the downward trend I am seeing persistent or transitory?
3 - Is a stock's movement currently too ambiguous to invest in?

That said, for those of you who do not intend to take risks in the stock market but are only interested in the broader utility of this dataset for the ML community, it will also help you with the following explorations:
1 - How well does a model trained on one tech giant generalize to others with different volatility profiles? (e.g., train on Apple and test on Netflix or Google)

2 - How accurately can we identify market regime switching? (e.g. trend dominant switches to stationary and then to oscillating)

3 - How well do the engineered features provided in this dataset compare to standard technical indicators? (e.g., RSI and MACD)

4 - With rare trend and stationary events amid dominant oscillating noise, this dataset provides an excellent real-world playground for experimenting with imbalanced learning techniques.

5 - Last but not least, as shown in the exploratory data analysis section of this document, downward trends are more separable than upward trends, which can help in developing models that are reliably defensive (i.e., that limit losses).

4 - ORGANIZATION OF DATA

Following is the tree structure for this dataset repository.

faang-engineered-time-series-features-2013-2025/
├── data/
│   ├── raw/
│   │   ├── AAPL_20130101_20251231.csv
│   │   ├── AMZN_20130101_20251231.csv
│   │   ├── GOOG_20130101_20251231.csv
│   │   ├── META_20130101_20251231.csv
│   │   ├── NFLX_20130101_20251231.csv
│   │   └── metadata.json
│   └── engineered/
│       ├── feature_dataset.csv
│       └── feature_dataset_metadata.json
├── assets/
└── README.md

The README.md is this file you are currently viewing. The assets/ folder includes the image and additional markdown files (each linked to from the README.md document) that provide additional context to the material in this file. You will not need to use any of the files in the assets/ folder directly, and can safely ignore them.

Next let's look at the content of the raw/ and engineered/ subfolders inside the data/ folder in more detail.

4.1 - INSIDE THE raw/ SUBFOLDER

There are 5 *.csv files and a metadata.json file in this folder.

The metadata.json file is a high-level summary (provided as a human-readable reference only) of the *.csv files in the same folder; its content is shown below. Besides the start and end dates (in YYYYMMDD format) that the time-series data spans, the number of rows in each of the *.csv files is provided under the "file_info" key.

# metadata.json file content
{
    "start_date": "20130101",
    "end_date": "20251231",
    "adjusted_prices": true,
    "file_info": {
        "META_20130101_20251231.csv": 3269,
        "AAPL_20130101_20251231.csv": 3269,
        "AMZN_20130101_20251231.csv": 3269,
        "NFLX_20130101_20251231.csv": 3269,
        "GOOG_20130101_20251231.csv": 3269
    }
}

The "adjusted_prices" key value is set to true, which simply indicates prices listed in each *.csv file are adjusted stock market prices. In case you are not familiar with the concept of "Adjusted price", it is simply the modified original price of a stock that eliminates the corporate action effects (e.g., stock splits) that change the price of a share that does not represent any change in the company's underlying market capitalization. From a dataset consistency and comparability perspective, working with the adjusted share prices (rather than the original share prices) for each stock avoids the overhead of compensating each price entry while working with each *.csv file, and hence makes the life of a data scientist much easier!

Each of the 5 *.csv files with OHLCV data corresponds to a specific stock and includes 3269 rows with the following features (i.e., columns):

"Date" - Time information including year, month and day captured for each available trading day and provided in YYYYMMDD format.

"Close" - Adjusted closing price per share at the end of the day.

"High" - Adjusted highest price per share during the day.

"Low" - Adjusted lowest price per share during the day.

"Open" - Adjusted opening price per share on the day.

"Volume" - Volume of traded share during the day.

4.2 - INSIDE THE engineered/ SUBFOLDER

This folder includes 2 files: feature_dataset.csv and feature_dataset_metadata.json.

The feature_dataset_metadata.json file is a high-level summary (provided as a human-readable reference only) of the *.csv file in the same folder; its content is shown below. This file includes all the information in metadata.json and extends it with a new key called "features_info".

{
    "start_date": "20130101",
    "end_date": "20251231",
    "adjusted_prices": true,
    "file_info": {
        "META_20130101_20251231.csv": 3269,
        "AAPL_20130101_20251231.csv": 3269,
        "AMZN_20130101_20251231.csv": 3269,
        "NFLX_20130101_20251231.csv": 3269,
        "GOOG_20130101_20251231.csv": 3269
    },
    "features_info": {
        "days_per_feature": 30,
        "META": {
            "total_samples": 3240
        },
        "AAPL": {
            "total_samples": 3240
        },
        "AMZN": {
            "total_samples": 3240
        },
        "NFLX": {
            "total_samples": 3240
        },
        "GOOG": {
            "total_samples": 3240
        }
    }
}

The additional information in the json file (listed under the "features_info" key) includes a key called "days_per_feature" with a value of 30. This means the price-driven features listed in the feature_dataset.csv file are calculated based on the price changes within a time window of 30 days. This time window is fixed in length and advances towards the future by one day with each row in the *.csv file. The last sentence will become clearer while describing the content of feature_dataset.csv shortly.

Each remaining key under "features_info" (besides "days_per_feature") provides the total number of rows (i.e., samples) for each stock ticker of interest. You will notice that the number of samples per stock ticker listed under "file_info" is 29 more than the corresponding count under "features_info" (3269 - 3240 = 29), which is expected due to the 30-day buffer and the sliding-window approach taken while calculating each sample in feature_dataset.csv.
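
In general, a sliding window of W days over N daily prices yields N - W + 1 windows, i.e., 3269 - 30 + 1 = 3240 samples per ticker.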

Next, let's take a look at the content of the feature_dataset.csv file, which is where our engineered features live.

This file includes the following features (i.e., columns):

"ticker" - Name of the stock ticker the row data corresponds to.

"end_date" - This is the last date inside the 30-day window used for calculating the features on the same row. Provided in YYYYMMDD format.

"slope" - The slope of the line of best fit for the prices in the 30-day window. (More on this in the next section)

"zcr" - Short for "zero crossing rate". This describes how many times the price movement changes direction in a 30-day window. (More on this in the next section)

"volatility" - This feature is a measure of the degree of variation in a stock's returns over the 30-day time window. It represents the stability (or lack thereof) of the stock price returns. (More on this in the next section)

"trend_strength" - Measures whether the observed slope indicates an obvious upward or downward movement over the 30-day window. (More on this in the next section)

"gt" - This is the ground truth and includes 5 unique classes that describe the price regime in a 30-day window. (These classes and their descriptions will be provided in more detail)

5 - ENGINEERED (PRICE) FEATURE CALCULATIONS

Before diving into the details of how each price feature in my dataset is calculated, keep in mind that "price" (unless stated otherwise) will always refer to the Adjusted Close price of a stock. Depending on the feature, the price may be used in its raw or transformed form, as I describe for each price feature in the rest of this section.

Important: While creating the features below, I paid particular attention to not making them dependent on the dataset itself, which would make generalization of ML models trained on this dataset problematic. The features (and the thresholds, as you will see later) are defined to be applicable to general stock price movements, and are not specific to the FAANG stock price movements between 2013 and 2025 included in this dataset. This was perhaps the most time-consuming part of the dataset preparation for me.

1 - "slope" - This refers to the slope of the linear fit to the prices in a 30-day window. However, rather than using the raw stock prices, I first take the log of each price, and then calculate the linear fit. Following are a few advantages of this approach:

  • Log prices make trend and volatility measures scale-invariant across different stocks.
  • Log prices also allow comparison between different timelines (where absolute price difference in the same stock may be vastly different, e.g., price in 2013 vs. 2025)
  • Log prices align naturally with log returns as well (as will be discussed shortly).
  • Using logs gives more stable statistical behavior for window-based classification.

As log of prices is used for the slope feature, it is important to keep the following points in mind:

1 - When using log prices, keep in mind that ln(0) is undefined. Therefore, you need to pre-filter zero-dollar raw prices prior to applying the log operator. (Though there is no such instance in this dataset, keep this in mind if you plan to use the log-price approach on another price dataset.)

2 - When using log of prices, the slope (b) will measure the relative drift rather than the absolute price drift of the stock.

3 - If you calculate the slope (b) based on log prices, it will approximately be equal to the (fixed) percentage change in price over time (see the proof here). This is regardless of the stock price, which allows direct comparison between different stocks in our solution.
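
To make the above concrete, here is a minimal sketch of the slope calculation, assuming an integer trading-day index as the time axis (the exact implementation used to build the dataset may differ):

import numpy as np

def slope_feature(prices):
    """Slope (b) of the least-squares line fit to log prices in a window."""
    log_p = np.log(np.asarray(prices, dtype=float))  # pre-filter zero prices first!
    t = np.arange(len(log_p))                        # trading-day index 0..29
    b, _intercept = np.polyfit(t, log_p, 1)          # degree-1 fit returns [slope, intercept]
    return b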

2 - "trend_strength" (TS) - Trend strength (ratio) indicates whether the upward or the downward (slope) trend of the 30-day frame is a strong or a weak one. This metric is calculated as shown below where the numerator is the slope magnitude of the line of best fit and the denominator is the standard deviation of all the samples in the 30-day frame.

$$\mathrm{TS} = \frac{|b|}{\sigma_{\text{log-price}}}$$

Caution: Since |b| is based on the log of prices in the 30-day frame, the standard deviation must also be calculated using the log of prices in that same 30-day frame for consistency!
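
A minimal sketch of the trend strength calculation, reusing numpy and the slope fit above (whether the population or sample standard deviation is used is my assumption):

def trend_strength_feature(prices):
    """TS = |b| / standard deviation of the log prices in the same window."""
    log_p = np.log(np.asarray(prices, dtype=float))
    t = np.arange(len(log_p))
    b, _intercept = np.polyfit(t, log_p, 1)
    return abs(b) / np.std(log_p)  # np.std defaults to the population std (ddof=0)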

3 - "volatility" - In this dataset I use volatility as the standard deviation of daily returns, which measures the amount of fluctuation relative to the average return behavior. For those of you who may be wondering why I chose to work with the returns rather than the raw prices, following are a few of the advantages of doing so:

  • Returns remove scale (e.g., a $10 stock can be compared to another at $500)
  • They stabilize variance
  • They capture relative movement instead of absolute price level

I first calculate the simple daily return (rt) between day 't-1' and 't' (where 'x' is the raw price) as follows:

$$r_t = \frac{x_t - x_{t-1}}{x_{t-1}}$$

Once we calculate all simple returns for the 30-day window (i.e., 29 simple returns in total), we can calculate the mean and the standard deviation for the same window. Here, the standard deviation of returns in a 30-day window is my volatility feature and is calculated as follows:

$$\sigma_r = \sqrt{\frac{1}{n-1} \sum_{t=1}^{n} \left(r_t - \bar{r}\right)^2}, \qquad n = 29 \text{ for a 30-day window}$$

One minor tweak I made to the above calculation: rather than calculating rt as a simple return (as described above), I calculate it as a log ratio, as shown below. The volatility calculation simply uses these log-ratio returns.

$$r_t = \ln\left(\frac{x_t}{x_{t-1}}\right)$$

Following are the reasons why I decided to use the log ratios rather than the simple returns:

  • Log returns preserve total movement across a window exactly (I provide an example of what I mean here)
  • Simple returns are NOT symmetrical while log returns are - due to the mathematical property of logarithms: ln(a) = -ln(1/a) (Check out my clarifying example here)
  • If your dataset includes stock tickers with very different price levels, using log returns will allow you to compare them fairly and meaningfully.

In short, volatility becomes a cleaner measure of "movement intensity" when log returns are used instead of simple returns!
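
Putting the pieces together, here is a minimal volatility sketch using log returns (the choice of the sample standard deviation, ddof=1, is my assumption):

def volatility_feature(prices):
    """Standard deviation of daily log returns over the window."""
    log_p = np.log(np.asarray(prices, dtype=float))
    log_returns = np.diff(log_p)          # 29 log returns for a 30-day window
    return np.std(log_returns, ddof=1)    # sample standard deviation (assumed)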

4 - "zcr" - The zero crossing rate quantifies how often the prices change direction in a 30-day window (e.g., move up and then down and then up again etc.) Therefore, ZCR as a feature is a strong indicator of an oscillating price pattern.

Though the name of this feature suggests movement around a zero baseline from positive to negative and vice versa, zcr is calculated using the "sign changes" of the stock's price movement. Let me explain how zcr is calculated, starting from a set of 4 raw prices as an example below:

raw daily prices: x1, x2, x3, x4
price differences: (x2-x1), (x3-x2), (x4-x3)
sign of differences: s1 = sign(x2-x1), s2 = sign(x3-x2), s3 = sign(x4-x3)
sign changes: (s1 ≠ s2), (s2 ≠ s3)

Therefore, if we have 'n' raw prices in a time window, we will have 'n-1' price differences and 'n-2' sign comparisons from which to count 'the zero crossing count', as shown below:

$$\mathrm{ZCC} = \sum_{i=1}^{n-2} \mathbf{1}\left[s_i \neq s_{i+1}\right], \qquad s_i = \mathrm{sign}(x_{i+1} - x_i)$$

Once we have the zero crossing count, we divide it by the total number of comparisons (n-2) to calculate the zcr as shown below:

$$\mathrm{zcr} = \frac{\mathrm{ZCC}}{n-2}$$

The zcr feature will vary between 0 and 1 where zcr=0 means the price trend never changes direction, while zcr=1 indicates an extreme oscillatory behavior.

Caution: If the price does not change from one day to the next, the sign() operator will return 0, and a sign comparison involving that day becomes ill-defined. For such corner cases, ignoring transitions involving 0 is an obvious approach. However, I use an alternative method that keeps the zcr calculation stable in such cases. You can read about my preferred approach here.
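
Here is a minimal zcr sketch using the naive zero-handling (simply skipping comparisons that involve a zero sign); the preferred stabilization linked above differs:

def zcr_feature(prices):
    """Fraction of adjacent price-difference sign pairs that disagree."""
    diffs = np.diff(np.asarray(prices, dtype=float))
    signs = np.sign(diffs)
    valid = (signs[:-1] != 0) & (signs[1:] != 0)   # naive: skip zero signs
    flips = (signs[:-1] != signs[1:]) & valid
    return flips.sum() / (len(prices) - 2)         # n-2 comparisons in total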

6 - GROUND TRUTH DEFINITIONS AND THE "GOLDEN" THRESHOLDS

The ground truth labels defined in this dataset are multi-class, and are designed to categorize the price movements in a 30-day window to identify a regime the stock is in. Next, let's look at the 5 different classes in this dataset and provide a description for each.

1 - TREND_UP: A 30-day profile belongs to this class if there is a strong upward price trend without extreme volatility.

2 - TREND_DOWN: The opposite of TREND_UP, i.e., a strong downward price trend without extreme volatility.

3 - STATIONARY: 30-day price profiles that indicate neither a strong upward nor a strong downward slope, without extreme volatility.

4 - OSCILLATING: When a 30-day profile has too much upward and downward variation (i.e., there are too many price direction flips within the window), it is assigned to this class.

5 - OTHER: If a 30-day price profile does not fall into any of the above 4 classes, it will be assigned this label.

In order to assign a 30-day price window to one of the above 5 classes, we will need to use the 4 price features I described earlier, plus a set of thresholds, which we have not defined yet. These thresholds are called the "golden thresholds" because they are used to deterministically create the ground truth information in our dataset. Therefore, if we create an algorithm (called the "oracle") that knows these thresholds and the logic used to make the class assignments for each 30-day window, that oracle would be able to predict the ground truth labels for our dataset with 100% accuracy. The challenge for a machine learning classifier trained on my dataset, however, is to predict the labels without any knowledge of the golden thresholds or the label assignment logic that uses them. No machine learning model is likely to be a match for the oracle in terms of accuracy, of course!

Next, we will take a close look at how the golden thresholds are defined/selected, and will follow that up with the logic I used to make the label assignments in this dataset.

1 - Trend direction threshold: I previously mentioned that the slope (b) of the line of best fit approximates the percentage price growth per day in a given 30-day window. (see my explanation here)

Empirically, in US equity markets (also backed up by the "Drift Burst Hypothesis" in [1]), a daily drift/change above 0.3% is considered trend-dominant. Therefore, we can define the following thresholds to identify strong upward/downward trends:

  • strong upward trend if b ≥ 0.003
  • strong downward trend if b ≤ -0.003

Note we are able to define the thresholds symmetrically (i.e., 0.003 and -0.003) because log-price slopes are symmetric! (see my explanation here)

2 - Near-zero slope threshold: This threshold is the level below which a daily drift direction becomes visually and statistically indistinguishable from noise over a 30-day window.

Major asset managers often classify "low volatility" or "stable" funds as those with monthly fluctuations in the 2-4% range. Investopedia and similar financial education platforms define "market noise" as price movement that does not change the underlying trend, and specify it to be 3% or less. I also included a research paper [2] that quotes similar levels for low volatility. Therefore, for consistency, I chose to treat 30-day price changes of up to 3% as random, near-stationary movements.

If we calculate the daily change using a 3% monthly price change as the reference, the following two thresholds can be defined (see here for the calculation method):

  • a near flat slope if |b| ≤ 0.001
  • a non-flat slope if |b| > 0.001
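
One way to sanity-check this: a 3% total change over a 30-day window corresponds to a daily log-price slope of roughly ln(1.03)/30 ≈ 0.00098, which rounds to the 0.001 threshold above.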

Note: The two slope thresholds defined so far create two bands, 0.001 < band 1 < 0.003 and -0.003 < band 2 < -0.001, that are not classified by either threshold. This is referred to as the "drift" region, and it is treated as a transitional section, as we will see in the description of the class allocation logic later in this document.

3 - Trend strength threshold: I already described the calculation of the "trend_strength" (TS) feature in this dataset as the ratio of the daily drift magnitude to the daily price fluctuation. One way to interpret TS is as a "daily signal-to-noise ratio" (SNR), which helps define our threshold. Any SNR lower than 2 typically gives ambiguous shapes caused by sign and magnitude ambiguities. Therefore, with SNR=2 we can calculate two TS thresholds as follows:

  • strong upward/downward trend if TS ≥ 0.36
  • weak/no trend if TS < 0.36

For those of you interested in understanding the calculation behind the above thresholds, I describe the details on this page.

In the rest of this section, I will provide 4 examples of price movements captured from this dataset to visually justify choosing SNR=2 for calculating the above thresholds.

Below are 4 plots of 4 different price movements over a 30-day period, captured directly from this dataset. The price axes on all plots span the same range to allow a direct comparison. The corresponding SNR value (calculated as described here) for each price movement is included as the title of each plot.

[Figure: four 30-day price windows from the dataset, plotted on identical price axes, with SNR values 0.25, 1.05, 2.12 and 2.48 as plot titles]

SNR=0.25 corresponds to the weakest trend sample in this example: as you can see, the price movement over the 30-day period is almost perfectly flat. The second plot, with SNR=1.05, appears to have a slight downward trend (compared to SNR=0.25), but the trend is difficult to discern due to the volatility of the price. The remaining two examples both have SNR > 2.0 and clearly indicate strong upward trends. Interestingly, although the upward movement in the SNR=2.48 plot visually appears weaker than in the SNR=2.12 plot, it is more consistent due to significantly less price noise over the 30-day period. This reduced noise yields the higher SNR for the last plot in this example.

Therefore, using SNR=2.0 to calculate the trend strength threshold is a reasonable choice that also aligns with visual observations of the price movements in our dataset.

Suggestion: Though I used SNR=2.0 for calculating my golden trend strength threshold in this dataset, you may wish to experiment with looser or more conservative SNR limits as an extension of this work.

4 - Oscillation threshold: This threshold is designed to indicate whether a price movement over a 30-day window is in an oscillatory regime. For the oscillation calculation, we use the zero crossing rate (zcr) I explained as a feature earlier.

A strong oscillation manifests as a high zcr (i.e., frequent direction reversals in the price over the period). In order to define our threshold, we need to compute a set of probabilities related to correctly categorizing oscillatory and non-oscillatory price movements.

You can find my detailed probability calculations here that explain where the "magic numbers" in the result below come from. For those of you who are just interested in the final threshold value, please continue reading.

Within a 30-day window (N=30), the maximum number of price direction changes is 28 (N-2), as we saw earlier. We assume a trend window can flip direction on any given day with probability 30% (which is generous, but can be set even lower at the expense of missing true trend windows with less price-direction stability!). We also set a target false positive rate of 5%, which defines the allowed margin for mislabeling valid trend windows as oscillatory. For these targets, any window with a zero crossing count of 13 or more will be marked as oscillatory. To sum up, the following is the criterion for our oscillation threshold.

  • oscillating window if zcr ≥ 13/28 ≈ 0.46
  • non-oscillatory window if zcr < 0.46

Suggestion: Feel free to experiment with different oscillation thresholds based on my approach described above and compare your findings.
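
As a starting point for such experiments, here is a minimal sketch of how the threshold above can be reproduced, under my reading of the model: each of the 28 daily comparisons in a trend window flips sign independently with probability 0.30, and we allow a 5% false positive rate:

from scipy.stats import binom

n_comparisons = 28   # N - 2 comparisons in a 30-day window
p_flip = 0.30        # assumed per-comparison flip probability in a trend window
alpha = 0.05         # target false positive rate

# Smallest zero crossing count that a trend window exceeds less than 5% of the time.
k = int(binom.ppf(1 - alpha, n_comparisons, p_flip)) + 1
print(k, round(k / n_comparisons, 2))  # -> 13 0.46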

5 - Volatility threshold: This threshold is defined for two different regimes:

- Low noise: This is the upper limit of the quiet region, where noise is small and "stationary" is meaningful.
- High noise: This is the lower limit of the high-noise region, where price fluctuations are so large that slope-based trend labels become unreliable.

I provide this additional explanation that shows the math behind the volatility threshold calculations. For those of you who want to cut to the chase and jump directly to the final thresholds used in this dataset, please continue reading.

The high and low volatility regions in a 30-day window are defined based on visual perception. When the total fluctuation observed in a given 30-day window falls below the 4% to 5% range, the period is said to have low volatility. Conversely, if the fluctuations exceed the 10% to 12% range, the regime is categorized as high volatility. The band between these two is called the "medium volatility" region.

Recall from the volatility feature definition that volatility based on daily log returns (rt) is calculated as follows:

$$\sigma_r = \sqrt{\frac{1}{n-1} \sum_{t=1}^{n} \left(r_t - \bar{r}\right)^2}$$

Therefore, using the low and high volatility limits defined above, the following volatility thresholds are used in this dataset:

  • low volatility or stationary window if σr ≤ 0.008
  • high volatility or no trend window if σr ≥ 0.02
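
(As a sanity check: assuming roughly independent daily returns, volatility scales with the square root of the horizon, so σr = 0.008 implies about 0.008 × √30 ≈ 4.4% total fluctuation over a 30-day window, and σr = 0.02 implies about 11%, consistent with the limits above.)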

Suggestion: Feel free to experiment with different low and high volatility limits and thresholds calculated on those new limits. Observe the effect of narrowing and widening the medium volatility band on the ground truth label definitions in the dataset.

7 - GROUND TRUTH LABEL ASSIGNMENT LOGIC

We now have all of the "golden threshold" definitions, which means we can list all 5 labeling rules used to generate the ground truth data in this dataset. The rules listed below MUST BE APPLIED IN THE ORDER THEY ARE LISTED to successfully reproduce all the labels in this dataset.

RULE 1 - STATIONARY class/label assignment: A frame is classified as STATIONARY if it is both flat and quiet.

Criteria:

  • |b| < 0.001 i.e., flat/near flat characteristic
  • AND σr <= 0.008 i.e., low volatility
  • AND zcr < 0.46 i.e., weak/no oscillations

RULE 2 - OSCILLATING class/label assignment: A frame is classified as OSCILLATING if it flips direction frequently and does not have a strong trend. Criteria:

  • zcr >= 0.46 i.e., strong oscillations
  • AND TS < 0.36 i.e., no strong trend
  • AND σr > 0.008 i.e., above the low volatility threshold to ignore tiny jitters as oscillations. Medium oscillations are also counted here.

RULE 3 - TREND_UP class/label assignment: A frame is classified as TREND_UP if it has a positive slope and trend dominates noise.

Criteria:

  • b >= 0.003 i.e., strong positive slope
  • AND TS >= 0.36 i.e., strong trend
  • AND zcr < 0.46 i.e., guard against oscillatory frames
  • AND σr < 0.02 i.e., guard against very noisy "trends". Medium volatility is tolerated in order not to miss trends with some fluctuation.

RULE 4 - TREND_DOWN class/label assignment: A frame is classified as TREND_DOWN if it has a negative slope and trend dominates noise.

Criteria:

  • b <= -0.003 i.e., strong negative slope
  • AND TS >= 0.36 i.e., strong trend
  • AND zcr < 0.46 i.e., guard against oscillatory frames
  • AND σr < 0.02 i.e., guard against very noisy "trends". Medium volatility is tolerated in order not to miss trends with some fluctuation.

RULE 5 - OTHER class/label assignment: If a frame does not fall into any of the above 4 classes, it is classified as OTHER. This class is expected to capture the following cases:

  • moderate slopes (i.e., not extreme enough for trends)
  • moderate volatility with low/medium zero crossings
  • mixed behavior (i.e., trend + oscillations)
  • transitional regimes
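
The rules above are fully deterministic, so they can be transcribed directly into code. A minimal sketch, with the golden thresholds hard-coded for readability:

def assign_label(b, ts, zcr, vol):
    """Apply the 5 labeling rules in the mandated order."""
    if abs(b) < 0.001 and vol <= 0.008 and zcr < 0.46:             # RULE 1: flat and quiet
        return "STATIONARY"
    if zcr >= 0.46 and ts < 0.36 and vol > 0.008:                  # RULE 2: frequent flips, no strong trend
        return "OSCILLATING"
    if b >= 0.003 and ts >= 0.36 and zcr < 0.46 and vol < 0.02:    # RULE 3: strong positive slope
        return "TREND_UP"
    if b <= -0.003 and ts >= 0.36 and zcr < 0.46 and vol < 0.02:   # RULE 4: strong negative slope
        return "TREND_DOWN"
    return "OTHER"                                                 # RULE 5: everything else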

8 - EXPLORATORY DATA ANALYSIS INSIGHTS

I would like to conclude by presenting a few results from my exploratory data analysis, which should help you decide how to move forward with your ML model development. Please note this is not an exhaustive EDA report, and you can extend it with your own analyses (label noise analysis, naive-classifier baselines, leave-one-ticker-out (LOTO) stress tests, etc.)

1 - Numerical column analysis: Includes key statistics and the distribution plots for the numerical columns in this dataset.

2 - Categorical column analysis: High level statistics of the two categorical columns in this dataset are provided including ground truth labels as well as the stock tickers.

3 - Correlation analysis: Pearson and Spearman correlation matrices are presented and the findings are interpreted from practical price movement and ML model development perspectives.

4 - Confusion risk analysis: This analysis evaluates the level of overlap between pairs of classes/labels where a reasonable classifier is likely to confuse them.

5 - Class frequency drift: This is also called the "Regime Stability Analysis", and its scope is to see whether the proportion of each class (OSCILLATING, OTHER, TREND_UP, STATIONARY, TREND_DOWN) remains stable across eras (between 2013 and 2025) or changes materially over time.

6 - Feature drift: Looks at whether the distribution of any of the features changes materially over time. This analysis does not answer 'why' a distribution changes or whether the change is likely to hurt models; we are simply trying to detect whether there is a change and characterize it.

9 - DATA LIMITATIONS/BIAS

  • This dataset only includes FAANG stocks and may not generalize to small-cap stocks or different market regimes. However, you can explore the level of such generalizability as an academic exercise.
  • Only FAANG stock data between 2013 and 2025 are included in this dataset. These are companies that remained successful in that era; companies that may have been delisted or faced bankruptcy during the macroeconomic shifts of that era are excluded.
  • Only Adjusted Close price movements are used in creating this dataset. Therefore, intra-day volatility is not captured, providing a "smoothed view" of market activity.
  • This dataset is imbalanced: the OSCILLATING and OTHER classes dominate, while the TREND_UP, TREND_DOWN and STATIONARY classes occur rarely. (see the categorical column analysis for more details)

10 - REFERENCES

[1] - Christensen, Kim and Oomen, Roel C.A. and Renò, Roberto, The Drift Burst Hypothesis (August 1, 2018). Available at SSRN: https://ssrn.com/abstract=2842535 or http://dx.doi.org/10.2139/ssrn.2842535

[2] - Arief, Usman and Baur, Dirk G. and Smales, Lee A., Estimating Volatility Thresholds for "No News Is Good News" (December 20, 2025). Available at SSRN: https://ssrn.com/abstract=5956234 or http://dx.doi.org/10.2139/ssrn.5956234

11 - CITATION

If you use this dataset in your research or project, please cite it in one of the formats below:

APA Format:
Tanriover, C. [ML-Owl]. (2026). FAANG Engineered Time-Series Features (2013-2025) [Data set]. Hugging Face. https://huggingface.co/datasets/ML-Owl/faang-engineered-time-series-features-2013-2025

BibTeX Format:

@misc{faang_engineered_features_2026,
  author       = {Tanriover, Cagri (ML-Owl)},
  title        = {FAANG Engineered Time-Series Features (2013-2025)},
  year         = {2026},
  publisher    = {Hugging Face},
  journal      = {Hugging Face Hub},
  howpublished = {\url{https://huggingface.co/datasets/ML-Owl/faang-engineered-time-series-features-2013-2025}},
}