tksbrokerapi.TradeRoutines
Technologies · Knowledge · Science
The TradeRoutines library contains helper methods used by trade scenarios implemented with the TKSBrokerAPI module.
- TKSBrokerAPI module documentation: https://tim55667757.github.io/TKSBrokerAPI/docs/tksbrokerapi/TKSBrokerAPI.html
- TKSBrokerAPI CLI examples: https://github.com/Tim55667757/TKSBrokerAPI/blob/master/README_EN.md
- About Tinkoff Invest API: https://tinkoff.github.io/investAPI/
- Tinkoff Invest API documentation: https://tinkoff.github.io/investAPI/swagger-ui/
- Open account for trading: https://tinkoff.ru/sl/AaX1Et1omnH
SI-constant: NANO = 10^-9
The Universal Fuzzy Scale is a special set of fuzzy levels: {Min, Low, Med, High, Max}.
FUZZY_SCALE contains the level names of the Universal Fuzzy Scale. Default: ["Min", "Low", "Med", "High", "Max"].
The signal filter is used by default to reduce signal strength.
Rules for opening positions depend on the fuzzy Risk/Reach levels.
This is the authors' technique, proposed by Timur Gilmullin and Mansur Gilmullin, based on fuzzy scales for measuring the levels of fuzzy Risk and fuzzy Reach. The following simple diagram explains what is meant by the Open/Close Fuzzy Rules:
In the table, T means True and F means False. The 1st position is for the opening rules, the 2nd position is for the closing rules. These rules are defined as the transposed matrix constants OPENING_RULES and CLOSING_RULES.
See also:
- An Engineering View of Trading: How a Trading Robot's Signal Algorithm Works: RU.
- FuzzyRoutines library.
- How to work with Universal Fuzzy Scales: EN, RU.
- The CLOSING_RULES constant, the CanOpen() and CanClose() methods.
Default rules for opening positions:
| Risk \ Reach | Min | Low | Med | High | Max |
|---|---|---|---|---|---|
| Min | False | False | True | True | True |
| Low | False | False | True | True | True |
| Med | False | False | True | True | True |
| High | False | False | False | False | False |
| Max | False | False | False | False | False |
Rules for closing positions depend on the fuzzy Risk/Reach levels. These rules are the opposite of OPENING_RULES (see the explanation there of what is meant by the Open/Close Fuzzy Rules).
See also: the CanClose() method.
Default rules for closing positions:
| Risk \ Reach | Min | Low | Med | High | Max |
|---|---|---|---|---|---|
| Min | False | False | False | False | False |
| Low | False | False | False | False | False |
| Med | True | True | False | False | False |
| High | True | True | True | False | False |
| Max | True | True | True | False | False |
Checks the position-opening rules in OPENING_RULES, depending on the fuzzy Risk/Reach levels.
See also: the OPENING_RULES, FUZZY_LEVELS and FUZZY_SCALE constants; the RiskLong() and RiskShort() methods; the ReachLong() and ReachShort() methods.
Parameters
- fuzzyRisk: Fuzzy Risk level name.
- fuzzyReach: Fuzzy Reach level name.
Returns
Bool. If True, then a position may be opened.
Checks the position-closing rules in CLOSING_RULES, depending on the fuzzy Risk/Reach levels.
See also: the CLOSING_RULES, FUZZY_LEVELS and FUZZY_SCALE constants; the RiskLong() and RiskShort() methods; the ReachLong() and ReachShort() methods.
Parameters
- fuzzyRisk: Fuzzy Risk level name.
- fuzzyReach: Fuzzy Reach level name.
Returns
Bool. If True, then the position may be closed.
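For example, a minimal usage sketch (the fuzzy levels here are hypothetical inputs obtained elsewhere, e.g. from RiskLong() and ReachLong(); CanOpen() and CanClose() are assumed to be importable as module-level functions of TradeRoutines):

```python
from tksbrokerapi.TradeRoutines import CanOpen, CanClose

# Hypothetical fuzzy levels, e.g. obtained from RiskLong()/RiskShort() and ReachLong()/ReachShort():
fuzzyRisk, fuzzyReach = "Low", "High"

if CanOpen(fuzzyRisk, fuzzyReach):
    print("Opening allowed")  # True with the default OPENING_RULES (row "Low", column "High")

if not CanClose(fuzzyRisk, fuzzyReach):
    print("Closing not allowed")  # the default CLOSING_RULES give False for the same pair
```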
Returns the Risk as a fuzzy level and the Risk percentage in the range [0, 100] if you want to buy from the current price.
This is the authors' method, proposed by Timur Gilmullin and Mansur Gilmullin, based on fuzzy scales for measuring the levels of Fuzzy Risk. The following simple diagram explains what is meant by the Fuzzy Risk level:
- If opening a long (buy) position from the current price: RiskLong = Fuzzy(|P − L| / (H − L)). Here P is the current price, L (H) is the lowest (highest) price in the forecasted chain of candles or the prognosis border of the price movement range, and Fuzzy() is the fuzzification function that converts real values to their fuzzy representation.
See also:
- An Engineering View of Trading: How a Trading Robot's Signal Algorithm Works: RU.
- FuzzyRoutines library.
- How to work with Universal Fuzzy Scales: EN, RU.
- The RiskShort() method, the CanOpen() and CanClose() methods, the ReachLong() and ReachShort() methods.
Parameters
- curPrice: Current actual price (usually the latest close price).
- pHighest: The highest close price in forecasted movements of candles chain or prognosis of the highest diapason border of price movement.
- pLowest: The lowest close price in forecasted movements of candles chain or prognosis of the lowest diapason border of price movement.
Returns
Dictionary with the Fuzzy Risk level and the Risk percentage, e.g. {"riskFuzzy": "High", "riskPercent": 66.67}.
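A short worked sketch of the formula above with illustrative numbers (assuming RiskLong() is importable as a module-level function of TradeRoutines; the output dictionary shown is indicative):

```python
from tksbrokerapi.TradeRoutines import RiskLong

# Illustrative forecast corridor and current price:
curPrice, pLowest, pHighest = 105.0, 100.0, 110.0

# By the formula: |P - L| / (H - L) = |105 - 100| / (110 - 100) = 0.5, i.e. 50% risk for a long entry.
print(RiskLong(curPrice=curPrice, pHighest=pHighest, pLowest=pLowest))
# Expected something like: {"riskFuzzy": "Med", "riskPercent": 50.0}
```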
Returns the Risk as a fuzzy level and the Risk percentage in the range [0, 100] if you want to sell from the current price.
This method is the opposite of RiskLong() (see the explanation there of what is meant by the Fuzzy Risk).
- If opening a short (sell) position from the current price: RiskShort = Fuzzy(|P − H| / (H − L)). Here P is the current price, L (H) is the lowest (highest) price in the forecasted chain of candles or the prognosis border of the price movement range, and Fuzzy() is the fuzzification function that converts real values to their fuzzy representation.
Parameters
- curPrice: Current actual price (usually the latest close price).
- pHighest: The highest close price in forecasted movements of candles chain or prognosis of the highest diapason border of price movement.
- pLowest: The lowest close price in forecasted movements of candles chain or prognosis of the lowest diapason border of price movement.
Returns
Dictionary with the Fuzzy Risk level and the Risk percentage, e.g. {"riskFuzzy": "Low", "riskPercent": 20.12}.
The Fuzzy Reach is a measure of how reachable a forecasted price (the highest or lowest close) is. In this function we calculate the reachability of the highest close price.
This is the authors' method, proposed by Timur Gilmullin and Mansur Gilmullin, based on fuzzy scales for measuring the levels of Fuzzy Reach. The following simple diagram explains what is meant by the Fuzzy Reach level:
There are fuzzy levels and percentages in the range [0, 100] for the maximum and minimum forecasted close prices. The prognosis horizon is divided into 5 parts of the time range: I, II, III, IV and V, from the first forecasted candle (or price) to the last forecasted candle (or price).
Every part correlates with a fuzzy level, depending on the distance in time from the current actual candle (or price): I = Max, II = High, III = Med, IV = Low, V = Min.
This function searches for the first fuzzy level appropriate to the part of the time range where the close price (highest or lowest) is located. Of course, you can use other price chains (open, high, low) instead of candle close prices, but usually this is not recommended.
Recommendation: if you have no prognosis chain of candles, just use the "Med" Fuzzy Reach level.
See also:
- An Engineering View of Trading: How a Trading Robot's Signal Algorithm Works: RU.
- The OPENING_RULES and CLOSING_RULES constants, the CanOpen() and CanClose() methods, the FUZZY_LEVELS and FUZZY_SCALE constants, the RiskLong() and RiskShort() methods, the ReachShort() method.
Parameters
- pClosing: Pandas Series with the prognosis chain of closing prices of candles. This is the "close prices" in an OHLCV-formatted candles chain. The forecasted prices are indexed starting from zero; index zero is the first candle of the forecast. The last price of the forecast is the "farthest" relative to the current actual close price.
Returns
Dictionary with the Fuzzy Reach level and the Reach percentage for the highest close price, e.g. {"reachFuzzy": "Low", "reachPercent": 20.12}.
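A minimal sketch (the forecast series is hypothetical; ReachLong() is assumed to accept a Pandas Series as documented):

```python
import pandas as pd
from tksbrokerapi.TradeRoutines import ReachLong

# Hypothetical forecast chain of close prices; index 0 is the first (nearest) forecasted candle:
pClosing = pd.Series([101.2, 101.9, 103.5, 102.8, 102.1])

# The highest close (103.5) falls into the III part of the horizon, so the level "Med" is expected:
print(ReachLong(pClosing))  # the fuzzy level "Med" is expected for this series
```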
The Fuzzy Reach is a measure of how reachable a forecasted price (the highest or lowest close) is. This method is similar to ReachLong() (see the explanation there of what is meant by the Fuzzy Reach), but in this case we calculate the reachability of the lowest close price.
Parameters
- pClosing: Pandas Series with the prognosis chain of closing prices of candles. This is the "close prices" in an OHLCV-formatted candles chain. The forecasted prices are indexed starting from zero; index zero is the first candle of the forecast. The last price of the forecast is the "farthest" relative to the current actual close price. Recommendation: if you have no prognosis chain of candles, just use the "Med" Fuzzy Reach level.
Returns
Dictionary with the Fuzzy Reach level and the Reach percentage for the lowest close price, e.g. {"reachFuzzy": "High", "reachPercent": 66.67}.
Format a timedelta object into a readable string like "0:00:12" or "0:00:12.34".
This function supports fixed-point precision formatting for seconds. Fractional seconds are rounded to the specified number of digits. No brackets are added — raw string like "H:MM:SS.ss" is returned.
If the precision is invalid (not an int or out of range [0..6]), the function will return the default string representation of the timedelta.
Examples:
FormatTimedelta(timedelta(seconds=12.987), precision=0) -> "0:00:12"
FormatTimedelta(timedelta(seconds=12.987), precision=1) -> "0:00:12.9"
FormatTimedelta(timedelta(seconds=12.987), precision=2) -> "0:00:12.99"
FormatTimedelta(timedelta(seconds=12.987), precision=5) -> "0:00:12.98700"
FormatTimedelta(timedelta(seconds=12.987), precision="bad") -> "0:00:12.987000"
Parameters
- timeDelta: Timedelta object to format.
- precision: Integer from 0 to 6 — how many digits to keep after seconds.
Returns
Formatted time string like "0:00:12" or "0:00:12.34". If precision is invalid, returns str(timeDelta).
Creates a tuple of date and time strings with a timezone, parsed from user-friendly dates.
Warning! All dates must be in the UTC time zone!
The user date format must be like "%Y-%m-%d", e.g. "2020-02-03" (3 Feb, 2020).
The output date is in UTC ISO time format by default: "%Y-%m-%dT%H:%M:%SZ".
Example input: start="2022-06-01", end="2022-06-20" -> output: ("2022-06-01T00:00:00Z", "2022-06-20T23:59:59Z").
An exception will be raised if an input date has an incorrect format.
If start=None and end=None, then the dates from yesterday to the end of the day are returned.
If start=some_date_1 and end=None, then the dates from some_date_1 to the end of the day are returned.
If start=some_date_1 and end=some_date_2, then the dates from the start of some_date_1 to the end of some_date_2 are returned.
The start day may be a negative integer (-1, -2, -3), meaning how many days ago.
Also, you can use keywords for start if end=None:
- today (from 00:00:00 to the end of the current day),
- yesterday (-1 day, from 00:00:00 to 23:59:59),
- week (-7 days, from 00:00:00 to the end of the current day),
- month (-30 days, from 00:00:00 to the end of the current day),
- year (-365 days, from 00:00:00 to the end of the current day).
Parameters
- start: start day in the format defined by userFormat, or a keyword.
- end: end day in the format defined by userFormat.
- userFormat: user-friendly date format, e.g. "%Y-%m-%d".
- outputFormat: output string date format.
Returns
A tuple of 2 strings ("start", "end"), e.g. ("2022-06-01T00:00:00Z", "2022-06-20T23:59:59Z"). The second string is the end of the last day. The tuple ("", "") is returned if errors occurred.
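A usage sketch (assuming the function is exposed as GetDatesAsString(), with the parameters described above):

```python
from tksbrokerapi.TradeRoutines import GetDatesAsString

# Explicit date range:
print(GetDatesAsString(start="2022-06-01", end="2022-06-20"))
# -> ("2022-06-01T00:00:00Z", "2022-06-20T23:59:59Z")

# Keyword for the start date, end omitted: the last 7 days up to the end of the current day.
print(GetDatesAsString(start="week"))
```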
Converts a number in nano view, given as a string parameter units and an integer parameter nano, to a float view.
Examples:
NanoToFloat(units="2", nano=500000000) -> 2.5
NanoToFloat(units="0", nano=50000000) -> 0.05
Parameters
- units: integer or integer string that represents the integer part of the number.
- nano: integer or integer string that represents the fractional part of the number.
Returns
Float view of the number. If an error occurred, then 0. is returned.
Converts a float number to a nano-type view: a dictionary with a string units and an integer nano parameter, {"units": "string", "nano": integer}.
Examples:
FloatToNano(number=2.5) -> {"units": "2", "nano": 500000000}
FloatToNano(number=0.05) -> {"units": "0", "nano": 50000000}
Parameters
- number: float number.
Returns
Nano-type view of the number: {"units": "string", "nano": integer}. If an error occurred, then {"units": "0", "nano": 0} is returned.
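Both helpers follow NANO = 10^-9; a short sketch relating them (values taken from the examples above):

```python
from tksbrokerapi.TradeRoutines import NanoToFloat, FloatToNano

# units + nano * 10^-9 gives the float view:
print(NanoToFloat(units="2", nano=500000000))  # 2 + 500000000 * 1e-9 = 2.5

# And back: the integer part goes to "units", the fractional part to "nano":
print(FloatToNano(number=0.05))  # {"units": "0", "nano": 50000000}
```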
This method gets the config as a dictionary (preloaded from a YAML file) and applies key: value pairs as names and values of class fields. Example for the class TradeScenario:
config["tickers"] = ["TICKER1", "TICKER2"] ==> TradeScenario(TinkoffBrokerServer).tickers = ["TICKER1", "TICKER2"].
Parameters
- instance: instance of the class to parametrize.
- **params: dict with all parameters in key: value format. Nothing is changed on the object if an error occurred.
Gets an input list and tries to separate it into equal parts.
Examples:
SeparateByEqualParts([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], parts=2) -> [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]]
SeparateByEqualParts([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], parts=2, union=True) -> [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9, 10]]
SeparateByEqualParts([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], parts=2, union=False) -> [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9], [10]]
SeparateByEqualParts([1, 2, 3], parts=2, union=True) -> [[1], [2, 3]]
SeparateByEqualParts([1, 2, 3], parts=2, union=False) -> [[1], [2], [3]]
If parts > length of elements:
SeparateByEqualParts([1], parts=2, union=True) -> [[1]]
SeparateByEqualParts([1, 2, 3], parts=4, union=True) -> [[1], [2], [3]]
SeparateByEqualParts([1], parts=2, union=False) -> [[1], []]
SeparateByEqualParts([1, 2, 3], parts=4, union=False) -> [[1], [2], [3], []]
Parameters
- elements: list of objects.
- parts: int, numbers of equal parts of objects.
- union: bool; if True and the remainder after separating is not empty, then the remainder is merged into the last part.
Returns
List of lists with equal parts of objects. If an error occurred, then the empty list [] is returned.
Calculates the maximum number of lots for a deal, depending on the current price and the volume of the instrument in one lot.
Formula: lots = maxCost // (currentPrice * volumeInLot), i.e. the maximum count of lots for which cost = lots * currentPrice * volumeInLot <= maxCost.
If costOneLot = currentPrice * volumeInLot > maxCost, then lots = 1 is returned.
If an error occurred, then lots = 0 is returned.
Parameters
- currentPrice: the current price of instrument, >= 0.
- maxCost: the maximum cost of all lots of instrument in portfolio, >= 0.
- volumeInLot: volumes of instrument in one lot, >= 1.
Returns
integer number of lots, >= 0.
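A worked example of the formula (assuming the helper is exposed as CalculateLotsForDeal() with the parameters listed above):

```python
from tksbrokerapi.TradeRoutines import CalculateLotsForDeal

# One lot costs 250.5 * 10 = 2505. With maxCost = 10000: 10000 // 2505 = 3 lots,
# because 3 * 2505 = 7515 <= 10000, while 4 lots would cost 10020 > 10000.
print(CalculateLotsForDeal(currentPrice=250.5, maxCost=10000, volumeInLot=10))  # 3
```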
Outlier detection with the Hampel Filter. It detects outliers based on a sliding window, comparing input values of the series with the window medians. The Hampel filter is often considered extremely effective in practice.
For each window, we calculate the Median and the Median Absolute Deviation (MAD). If the considered observation differs from the window median by more than sigma standard deviations (estimated as scaleFactor times the MAD), then we treat it as an outlier.
Let Xi be the elements of the input series in the i-th window, s the number of standard deviations (sigma), and k the scale factor, which depends on the distribution (≈1.4826 for the normal distribution).
How to calculate the rolling MAD: MAD(Xi) = Median(|x1 − Median(Xi)|, ..., |xn − Median(Xi)|)
What is an anomaly: A = {a | |a − Median(Xi)| > s ∙ k ∙ MAD(Xi)}
References:
- Gilmullin T.M., Gilmullin M.F. How to quickly find anomalies in number series using the Hampel method:
- Lewinson Eryk. Outlier Detection with Hampel Filter. September 26, 2019.
- Hancong Liu, Sirish Shah and Wei Jiang. On-line outlier detection and data cleaning. Computers and Chemical Engineering. Vol. 28, March 2004, pp. 1635–1647.
- Hampel F. R. The influence curve and its role in robust estimation. Journal of the American Statistical Association, 69, 382–393, 1974.
Examples:
HampelFilter([1, 1, 1, 1, 1, 1], window=3) -> pd.Series([False, False, False, False, False, False])
HampelFilter([1, 1, 1, 2, 1, 1], window=3) -> pd.Series([False, False, False, True, False, False])
HampelFilter([0, 1, 1, 1, 1, 0], window=3) -> pd.Series([True, False, False, False, False, True])
HampelFilter([1], window=3) -> pd.Series([False])
HampelFilter([5, 5, 50, 5, 5], window=2) -> pd.Series([False, False, True, False, False])
HampelFilter([100, 1, 1, 1, 1, 100], window=2) -> pd.Series([True, False, False, False, False, True])
HampelFilter([1, 1, 10, 1, 10, 1, 1], window=2) -> pd.Series([False, False, True, False, True, False, False])
Parameters
- series: Pandas Series object with numbers in which we identify outliers.
- window: length of the sliding window (5 points by default), 1 <= window <= len(series).
- sigma: sigma is the number of standard deviations which identify the outlier (3 sigma by default), > 0.
- scaleFactor: constant scale factor (1.4826 by default for Gaussian distribution), > 0.
Returns
Pandas Series object with True/False values. True means that an outlier was detected at that position of the input series. If an error occurred, then an empty series is returned.
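One of the examples above as runnable code (a sketch, assuming HampelFilter() accepts a Pandas Series):

```python
import pandas as pd
from tksbrokerapi.TradeRoutines import HampelFilter

series = pd.Series([1, 1, 1, 2, 1, 1])
print(HampelFilter(series, window=3).tolist())  # [False, False, False, True, False, False]
```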
Anomaly detection function using the Hampel Filter. This function returns the minimum index among the anomalous elements, or the index of the first maximum element in the input series if that index is less than the anomaly index.
If the series has no anomalies, then None is returned.
The anomaly filter is a function F: X → {True, False}, where F(xi) = True if xi ∈ A and False if xi ∉ A, with X the input series with elements xi and A the anomaly set.
References:
- Gilmullin T.M., Gilmullin M.F. How to quickly find anomalies in number series using the Hampel method. December 27, 2022.
- Jupyter Notebook with examples:
- A simple Python script that demonstrates how to use the Hampel Filter to determine anomalies in time series:
Examples:
HampelAnomalyDetection([1, 1, 1, 1, 1, 1]) -> None
HampelAnomalyDetection([1, 1, 1, 1, 111, 1]) -> 4
HampelAnomalyDetection([1, 1, 10, 1, 1, 1]) -> 2
HampelAnomalyDetection([111, 1, 1, 1, 1, 1]) -> 0
HampelAnomalyDetection([111, 1, 1, 1, 1, 111]) -> 0
HampelAnomalyDetection([1, 11, 1, 111, 1, 1]) -> 1
HampelAnomalyDetection([1, 1, 1, 111, 99, 11]) -> 3
HampelAnomalyDetection([1, 1, 11, 111, 1, 1, 1, 11111]) -> 2
HampelAnomalyDetection([1, 1, 1, 111, 111, 1, 1, 1, 1]) -> 3
HampelAnomalyDetection([1, 1, 1, 1, 111, 1, 1, 11111, 5555]) -> 4
HampelAnomalyDetection([9, 13, 12, 12, 13, 12, 12, 13, 12, 12, 13, 12, 12, 13, 12, 13, 12, 12, 1, 1]) -> 0
HampelAnomalyDetection([9, 13, 12, 12, 13, 12, 1000, 13, 12, 12, 300000, 12, 12, 13, 12, 2000, 1, 1, 1, 1]) -> 0
Some **kwargs parameters you can pass to HampelFilter():
- window is the length of the sliding window (5 points by default), 1 <= window <= len(series).
- sigma is the number of standard deviations which identify the outlier (3 sigma by default), > 0.
- scaleFactor is the constant scale factor (1.4826 by default), > 0.
Parameters
- series: List or Pandas Series of numeric values to check for anomalies.
- compareWithMax: If True (default), returns min(index of anomaly, index of first maximum). If False, returns only the first anomaly index detected by HampelFilter().
- kwargs: Additional parameters are forwarded to HampelFilter().
Returns
Index of the first anomaly (or intersection with the maximum, if enabled). Returns None if no anomaly is found.
Calculates the adaptive target cash reserve based on current and historical portfolio drawdowns.
This function dynamically adjusts the reserve allocated for averaging positions (e.g., during drawdowns). If the drawdown increases for several consecutive iterations, the reserve is amplified exponentially. If the drawdown stabilizes or decreases, the reserve is reset to the base level.
The amplification is computed as: amplification = amplificationFactor × exp(growStreak × amplificationSensitivity)
Where:
- growStreak is the number of consecutive days the drawdown has been increasing, including the current day.
- amplificationFactor is the base multiplier.
- amplificationSensitivity controls how aggressively the amplification grows with each additional drawdown increase.
Example:
With amplificationFactor = 1.25 and amplificationSensitivity = 0.1, if the drawdown increases for 3 days: amplification = 1.25 × exp(0.3) ≈ 1.25 × 1.3499 ≈ 1.687
Parameters
- drawdowns: historical portfolio drawdowns (fractions between 0 and 1); drawdowns[0] is the oldest, drawdowns[-1] is the most recent.
- curDrawdown: current portfolio drawdown at the time of calculation.
- reserve: base reserve ratio (e.g., 0.05 means 5% of the portfolio value).
- portfolioValue: current portfolio value (in currency units).
- amplificationFactor: base multiplier for reserve amplification (default is 1.25).
- amplificationSensitivity: exponential growth rate of amplification per growStreak step (default is 0.1).
Returns
calculated target cash reserve in portfolio currency units.
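A sketch of the amplification step only (an illustrative helper, not the library function; the full reserve logic also uses the base reserve and the portfolio value as described above):

```python
import math

def amplification(growStreak: int, amplificationFactor: float = 1.25, amplificationSensitivity: float = 0.1) -> float:
    """Illustrative: amplification = amplificationFactor * exp(growStreak * amplificationSensitivity)."""
    return amplificationFactor * math.exp(growStreak * amplificationSensitivity)

print(round(amplification(growStreak=3), 3))  # ~1.687, matching the example above
```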
Replaces outliers in a time series using the Hampel filter and a selected replacement strategy.
This function detects anomalies using the Hampel method and replaces them according to the specified strategy. It is designed for use in financial time series, sensor data, or any numerical sequences requiring robust cleaning before further analysis (e.g., volatility estimation, trend modeling, probability forecasting).
Available replacement strategies:
"neighborAvg": average of adjacent neighbors (default). Best for stable, low-noise time series where local continuity matters.
"prev": previous non-outlier value. Suitable for cumulative or trend-sensitive series, avoids abrupt distortions.
"const": fixed fallback value. Recommended when anomalies reflect technical failures (e.g., spikes due to API glitches).
"medianWindow": local window median (uses medianWindow size). Robust to single-point noise and short bursts of volatility; good for candle data.
"rollingMean": centered rolling mean over the window (same as a Hampel window). Applies smooth correction while preserving a general shape; works well for low-volatility assets.
Parameters
- series: input time series as a Pandas Series of floats.
- window: sliding window size used in Hampel filtering (5 by default).
- sigma: threshold multiplier for anomaly detection (3 by default).
- scaleFactor: scaling factor for the MAD (1.4826 by default, optimal for Gaussian data).
- strategy: strategy used to replace detected outliers (see the list above).
- fallbackValue: constant value used as a fallback in the "const" strategy or when neighbors are missing.
- medianWindow: window size used for the "medianWindow" strategy.
Returns
cleaned time series as a Pandas Series with outliers replaced.
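A usage sketch (the input series is hypothetical; HampelCleaner() is assumed to take a Pandas Series plus the parameters listed above):

```python
import pandas as pd
from tksbrokerapi.TradeRoutines import HampelCleaner

raw = pd.Series([100.0, 100.5, 100.4, 150.0, 100.6, 100.7])  # 150.0 looks like a technical spike
clean = HampelCleaner(raw, window=3, strategy="neighborAvg")
print(clean.tolist())  # the spike should be replaced by the average of its neighbors; other values stay intact
```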
Calculates logarithmic returns for a time series of prices.
Parameters
- series: A series of close prices.
Returns
A series of log returns.
Computes the mean return from a log-return series.
Parameters
- logReturns: A series of log returns.
Returns
The average return.
Computes the sample standard deviation of log returns using specified Bessel correction.
Parameters
- logReturns: A series of log returns.
- ddof: Degrees of freedom for Bessel's correction (1 by default, use 2 per methodology).
Returns
Volatility (standard deviation).
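These three helpers follow standard definitions; a plain NumPy/Pandas sketch of the same computations (illustrative code, not the library functions themselves):

```python
import numpy as np
import pandas as pd

closes = pd.Series([100.0, 101.0, 100.5, 102.0])

logReturns = np.log(closes / closes.shift(1)).dropna()  # ln(P_t / P_{t-1})
meanReturn = logReturns.mean()                          # average log return
volatility = logReturns.std(ddof=1)                     # sample standard deviation (Bessel correction via ddof)

print(meanReturn, volatility)
```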
Computes the standardized deviation (z-score) using geometric Brownian motion with drift and volatility.
Parameters
- logTargetRatio: Logarithm of (targetPrice / currentPrice).
- meanReturn: Estimated mean of log returns (μ).
- volatility: Estimated volatility of log returns (σ).
- horizon: Forecast horizon (number of candles).
Returns
z-score value (float).
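A common way to write this z-score under geometric Brownian motion is sketched below (this exact expression is an assumption for illustration; the library's formula may differ in details):

```python
import math

def gbm_z_score(logTargetRatio: float, meanReturn: float, volatility: float, horizon: int) -> float:
    # Assumed formulation: z = (ln(target/current) - horizon * (mu - sigma^2 / 2)) / (sigma * sqrt(horizon))
    drift = horizon * (meanReturn - volatility ** 2 / 2.0)  # effective drift over the horizon
    return (logTargetRatio - drift) / (volatility * math.sqrt(horizon))

print(gbm_z_score(logTargetRatio=0.02, meanReturn=0.001, volatility=0.01, horizon=10))
```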
Combines two conditional probabilities using Bayesian aggregation.
Parameters
- p1: First probability.
- p2: Second probability.
Returns
Aggregated probability using Bayesian fusion.
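A standard Bayesian fusion formula for two independent probability estimates is sketched below (an assumption for illustration; the library may use a different aggregation):

```python
def bayes_fuse(p1: float, p2: float) -> float:
    """Assumed formula: p = p1*p2 / (p1*p2 + (1 - p1)*(1 - p2))."""
    joint = p1 * p2
    return joint / (joint + (1.0 - p1) * (1.0 - p2))

print(round(bayes_fuse(0.7, 0.6), 4))  # 0.7778: two agreeing estimates reinforce each other
```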
Computes a dynamic weight coefficient based on relative volatility of two timeframes.
Parameters
- sigmaLow: Volatility from the lower timeframe (faster/shorter interval).
- sigmaHigh: Volatility from the higher timeframe (slower/longer interval).
Returns
Weight alpha in the range [0.0, 1.0], prioritizing a higher timeframe when its volatility is higher.
Calculates a simple moving average (SMA) using a sliding window over a NumPy array with running sum optimization.
Parameters
- array: A NumPy array of input data (e.g., closing prices).
- window: The size of the rolling window for calculating the average. Must be a positive integer.
Returns
A NumPy array containing the rolling mean values, with NaNs for positions before the first full window.
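The running-sum idea can be sketched with NumPy's cumulative sum (illustrative code; the library routine may differ in details such as NaN handling):

```python
import numpy as np

def running_sum_sma(array: np.ndarray, window: int) -> np.ndarray:
    """SMA via cumulative sums: each window mean is (cumsum[i] - cumsum[i - window]) / window."""
    result = np.full(array.shape, np.nan, dtype=float)
    cumsum = np.cumsum(np.insert(array.astype(float), 0, 0.0))
    result[window - 1:] = (cumsum[window:] - cumsum[:-window]) / window  # NaN before the first full window
    return result

print(running_sum_sma(np.array([1.0, 2.0, 3.0, 4.0, 5.0]), window=3))  # [nan nan 2. 3. 4.]
```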
Calculates a rolling standard deviation over a NumPy array using a sliding window.
Parameters
- array: A NumPy array of input data (e.g., closing prices).
- window: The size of the rolling window for calculating standard deviation.
- ddof: Delta degrees of freedom. Default is 1.
Returns
A NumPy array containing the rolling standard deviation values.
Calculates Bollinger Bands (BBANDS) using a fast NumPy-based implementation.
Parameters
- close: Series or array of closing prices.
- length: Rolling window size for the moving average and standard deviation. The default is 5.
- std: Number of standard deviations that determines the width of the bands. The default is 2.0.
- ddof: Delta degrees of freedom for the standard deviation calculation. Default is 0.
- offset: How many periods to offset the resulting bands. The default is 0.
- kwargs: Optional keyword arguments are forwarded for filling missing values. Supported options (with default values):
  - fillna (None): Value to fill missing data points (NaN values).
  - fill_method (None): Method to fill missing data points (e.g., ffill, bfill).
Returns
A pandas DataFrame containing the following columns:
- lower: Lower Bollinger Band.
- mid: Middle band (simple moving average).
- upper: Upper Bollinger Band.
- bandwidth: Percentage bandwidth between the upper and lower bands.
- percent: Position of the close price within the bands (from 0 to 1).
Returns None if the input is invalid.
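A plain Pandas sketch that produces the described columns (illustrative code, not the library implementation):

```python
import pandas as pd

close = pd.Series([10.0, 10.2, 10.1, 10.4, 10.3, 10.6, 10.5])
length, std = 5, 2.0

mid = close.rolling(length).mean()
dev = std * close.rolling(length).std(ddof=0)
bands = pd.DataFrame({"lower": mid - dev, "mid": mid, "upper": mid + dev})
bands["bandwidth"] = 100.0 * (bands["upper"] - bands["lower"]) / bands["mid"]    # percentage bandwidth
bands["percent"] = (close - bands["lower"]) / (bands["upper"] - bands["lower"])  # close position inside the bands
print(bands.tail(3))
```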
Calculates the Parabolic SAR (PSAR) indicator using a fast NumPy-based implementation.
Parameters
- high: Series or array of high prices.
- low: Series or array of low prices.
- af0: Initial Acceleration Factor. The default is 0.02.
- af: Acceleration Factor (not used separately, defaults to af0). Default is None.
- maxAf: Maximum Acceleration Factor. The default is 0.2.
- offset: How many periods to offset the resulting arrays. The default is 0.
- kwargs: Optional keyword arguments are forwarded for filling missing values. Supported options (with default values):
  - fillna (None): Value to fill missing data points (NaN values).
  - fill_method (None): Method to fill missing values (e.g., ffill, bfill).
Returns
A pandas DataFrame containing the following columns:
- long: SAR points for long trends (upward movement).
- short: SAR points for short trends (downward movement).
- af: Acceleration Factor values over time.
- reversal: 1 if a reversal is detected on this candle, otherwise 0.
Returns None if the input is invalid.
Fast estimation of Hurst exponent using the rescaled range (R/S) method.
Parameters
- series: NumPy array of prices (1D).
Returns
Hurst exponent ∈ [0.0, 1.0].
Fast Sample Entropy for chaos estimation.
Parameters
- series: NumPy array of floats.
- embeddingDim: Embedding dimension m.
- tolerance: Tolerance r.
Returns
Sample entropy.
Fast Detrended Fluctuation Analysis (DFA) estimator.
Parameters
- series: NumPy array of floats.
- scale: Box size for detrending.
Returns
Scaling exponent alpha.
Dispatches to a selected chaos estimation model.
Parameters
- series: NumPy array of floats.
- model: One of the values:
  - hurst: fast estimation of the Hurst exponent using the rescaled range (R/S) method, see FastHurst();
  - sampen: fast Sample Entropy for chaos estimation, see FastSampEn();
  - dfa: fast Detrended Fluctuation Analysis (DFA) estimator, see FastDfa().
Returns
Chaos value.
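A small sketch calling one of the named estimators directly (synthetic data; FastHurst() is assumed to accept a 1D NumPy array of prices as documented):

```python
import numpy as np
from tksbrokerapi.TradeRoutines import FastHurst

rng = np.random.default_rng(42)
prices = 100.0 + np.cumsum(rng.normal(0, 0.5, 500))  # synthetic random-walk prices

print(FastHurst(prices))  # around 0.5 is expected for a pure random walk; >0.5 hints at trending, <0.5 at mean reversion
```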
Converts a chaos metric value into a trust coefficient in [0.0, 1.0].
For supported models:
- hurst: symmetric parabola with a peak at 0.5;
- dfa: linear fade from 0.5;
- sampen: inverse, low entropy means high trust.
Parameters
- value: Chaos measure value.
- model: One of the values:
  - hurst: fast estimation of the Hurst exponent using the rescaled range (R/S) method, see FastHurst();
  - sampen: fast Sample Entropy for chaos estimation, see FastSampEn();
  - dfa: fast Detrended Fluctuation Analysis (DFA) estimator, see FastDfa().
Returns
Confidence coefficient ∈ [0.0, 1.0].
Normalized phase location of price in [0.0, 1.0] inside the Bollinger channel.
Parameters
- price: Current price of the asset.
- lower: Lower Bollinger band.
- upper: Upper Bollinger band.
Returns
Phase position ∈ [0.0, 1.0], where 0.0 is bottom, 1.0 is top.
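A sketch of the normalization described above (illustrative; the actual routine may handle degenerate channels differently):

```python
def phase_position(price: float, lower: float, upper: float) -> float:
    """Normalized location of the price inside the channel: 0.0 at the lower band, 1.0 at the upper band."""
    if upper <= lower:
        return 0.5  # degenerate channel: an illustrative fallback to the middle
    return min(max((price - lower) / (upper - lower), 0.0), 1.0)

print(phase_position(price=103.0, lower=100.0, upper=104.0))  # 0.75, i.e. close to the top of the channel
```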
Trust modifier based on the current phase and direction of signal.
Parameters
- phase: Normalized price position ∈ [0.0, 1.0] in the channel.
- direction: Signal direction; must be Buy or Sell.
Returns
Confidence modifier ∈ [0.0, 1.0].
Final-adjusted probability after applying chaos and phase trust modifiers.
Parameters
- pModel: Base probability from the main model.
- chaos: Chaos trust coefficient ∈ [0.0, 1.0].
- phase: Phase trust coefficient ∈ [0.0, 1.0].
- wModel: Weight for the original model probability.
- wChaos: Weight for chaos confidence.
- wPhase: Weight for phase confidence.
Returns
Adjusted probability ∈ [0.0, 1.0].
Estimates the probability of reaching a target price using two price series from different timeframes. Implements full methodology: log returns, volatility with Bessel correction, effective drift, z-score, cumulative probability, Bayesian aggregation, volatility-based weighting, and fuzzy classification.
References:
- Will the Price Hit the Target: Assessing Probability Instead of Guessing (RU article): https://teletype.in/@tgilmullin/target-probability
- Statistical Estimation of the Probability of Reaching a Target Price Considering Volatility and Returns Across Different Timeframes (RU article on which the formulas are based).
Parameters
- seriesLowTF: A close-price series from the lower timeframe.
- seriesHighTF: A close-price series from the higher timeframe.
- currentPrice: The current price of the asset.
- targetPrice: The target price to be reached or exceeded.
- horizonLowTF: The forecast horizon in candles for the lower timeframe.
- horizonHighTF: The forecast horizon in candles for the higher timeframe.
- ddof: Degrees of freedom for volatility estimation (use 2 as per article).
- cleanWithHampel: If True, applies outlier cleaning to both input series before computing log returns, using HampelCleaner() (False by default). Recommended for real market data where spikes, anomalies, or gaps may distort volatility and probability estimates.
- chaosTrust: Trust coefficient based on the chaos metric ∈ [0.0, 1.0]. Default is 1.0 (no modification), see also ChaosConfidence().
- phaseTrust: Trust coefficient based on the Bollinger-band phase ∈ [0.0, 1.0]. Default is 1.0 (no modification), see also PhaseConfidence().
- kwargs: Optional keyword arguments are forwarded to HampelCleaner() if cleanWithHampel is True. Supported options (with default values):
  - window (5): Sliding window size for HampelCleaner().
  - sigma (3): Threshold multiplier for anomaly detection.
  - scaleFactor (1.4826): Scaling factor for the MAD.
  - strategy (neighborAvg by default): Outlier replacement strategy:
    • neighborAvg – average of adjacent neighbors. Good for a smooth, low-noise series.
    • prev – previous valid value. Preserves the trend direction.
    • const – constant fallback. Use for API glitches or corrupted data.
    • medianWindow – local median window. Best default for real-world candles.
    • rollingMean – centered mean smoothing for low-volatility series.
  - fallbackValue (0.0): Constant value used in the const strategy or edge cases.
  - medianWindow (3): Window size for the "medianWindow" strategy.
Returns
A tuple (pFinal, fFinal), where:
- pFinal: a float in the range [0.0, 1.0], the final adjusted probability of reaching the target, optionally modified by chaos and phase confidence if enabled.
- fFinal: a fuzzy label, one of ["Min", "Low", "Med", "High", "Max"], based on pFinal.
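A usage sketch (the function name EstimateTargetProbability and the import path are assumptions based on this description; the input series are synthetic):

```python
import numpy as np
import pandas as pd
from tksbrokerapi import TradeRoutines  # the function name below is an assumption, adjust to the actual module API

rng = np.random.default_rng(7)
lowTF = pd.Series(100.0 + np.cumsum(rng.normal(0, 0.2, 200)))   # e.g. hourly closes
highTF = pd.Series(100.0 + np.cumsum(rng.normal(0, 0.6, 200)))  # e.g. daily closes

pFinal, fFinal = TradeRoutines.EstimateTargetProbability(
    seriesLowTF=lowTF, seriesHighTF=highTF,
    currentPrice=float(lowTF.iloc[-1]), targetPrice=float(lowTF.iloc[-1]) * 1.02,
    horizonLowTF=24, horizonHighTF=5,
    ddof=2, cleanWithHampel=True,
)
print(pFinal, fFinal)  # probability in [0.0, 1.0] and its fuzzy label
```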
Determines the actual number of decimal digits used in the values (not the minimal sufficient number, but the number visibly present). Fast and suitable for float formatting before saving to CSV.
Parameters
- values: List, array or Series of floats.
- maxDigits: Max digits to detect (15 by default).
- sampleSize: How many values to sample if the array is large.
Returns
The actual maximum number of digits after the decimal point.