tksbrokerapi.TradeRoutines

Technologies · Knowledge · Science

The TradeRoutines library contains helper methods used by trade scenarios implemented with the TKSBrokerAPI module.

NANO = 1e-09

SI constant: NANO = 10^-9

FUZZY_SCALE = <fuzzyroutines.FuzzyRoutines.UniversalFuzzyScale object>

Universal Fuzzy Scale is a special set of fuzzy levels: {Min, Low, Med, High, Max}.

FUZZY_LEVELS = ['Min', 'Low', 'Med', 'High', 'Max']

Level names on Universal Fuzzy Scale FUZZY_SCALE. Default: ["Min", "Low", "Med", "High", "Max"].

SIGNAL_FILTER = {
    'Max':  {'Max': 'Max', 'High': 'High', 'Med': 'Med', 'Low': 'Low', 'Min': 'Min'},
    'High': {'Max': 'High', 'High': 'Med', 'Med': 'Low', 'Low': 'Min', 'Min': 'Min'},
    'Med':  {'Max': 'Med', 'High': 'Low', 'Med': 'Min', 'Low': 'Min', 'Min': 'Min'},
    'Low':  {'Max': 'Low', 'High': 'Min', 'Med': 'Min', 'Low': 'Min', 'Min': 'Min'},
    'Min':  {'Max': 'Min', 'High': 'Min', 'Med': 'Min', 'Low': 'Min', 'Min': 'Min'},
}

The default signal filter, used to reduce signal strength: the outer key is the filter level, the inner key is the input signal level, and the value is the resulting (reduced) signal level.

OPENING_RULES =
       Min    Low    Med    High   Max
Min    False  False  True   True   True
Low    False  False  True   True   True
Med    False  False  True   True   True
High   False  False  False  False  False
Max    False  False  False  False  False

Rules for opening positions, depending on fuzzy Risk/Reach levels.

This is the author's technique, proposed by Timur Gilmullin and Mansur Gilmullin, based on fuzzy scales for measuring the levels of fuzzy risk and fuzzy reachability. The following simple diagram explains what we mean by Open/Close Fuzzy Rules:

(Figure: Open/Close rules matrix.)

In the diagram, T means True and F means False; the first position in each cell is the opening rule, the second is the closing rule. These rules are defined as the transposed matrix constants OPENING_RULES and CLOSING_RULES.

See also: the CanOpen() and CanClose() methods.

Default rules for opening positions:

Risk \ Reach   Min    Low    Med    High   Max
Min            False  False  True   True   True
Low            False  False  True   True   True
Med            False  False  True   True   True
High           False  False  False  False  False
Max            False  False  False  False  False
CLOSING_RULES =
       Min    Low    Med    High   Max
Min    False  False  False  False  False
Low    False  False  False  False  False
Med    True   True   False  False  False
High   True   True   True   False  False
Max    True   True   True   False  False

Rules for closing positions, depending on fuzzy Risk/Reach levels. These rules are the opposite of OPENING_RULES (see the explanation there of what we mean by Open/Close Fuzzy Rules).

See also: CanClose() method.

Default rules for closing positions:

Risk \ Reach   Min    Low    Med    High   Max
Min            False  False  False  False  False
Low            False  False  False  False  False
Med            True   True   False  False  False
High           True   True   True   False  False
Max            True   True   True   False  False
def CanOpen(fuzzyRisk: str, fuzzyReach: str) -> bool:

Checks the position-opening rules in OPENING_RULES for the given fuzzy Risk/Reach levels.

See also: the OPENING_RULES constant.

Parameters
  • fuzzyRisk: Fuzzy Risk level name.
  • fuzzyReach: Fuzzy Reach level name.
Returns

Bool. If True, then it is possible to open a position.

def CanClose(fuzzyRisk: str, fuzzyReach: str) -> bool:

Checks the position-closing rules in CLOSING_RULES for the given fuzzy Risk/Reach levels.

See also: the CLOSING_RULES constant.

Parameters
  • fuzzyRisk: Fuzzy Risk level name.
  • fuzzyReach: Fuzzy Reach level name.
Returns

Bool. If True, then it is possible to close a position.
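
A short usage sketch for both rule checks (assuming the functions are imported from tksbrokerapi.TradeRoutines; expected values follow the default rule tables above):

    from tksbrokerapi.TradeRoutines import CanOpen, CanClose

    # Risk "Low" with Reach "High" allows opening, by the OPENING_RULES table:
    print(CanOpen(fuzzyRisk="Low", fuzzyReach="High"))  # True

    # Risk "High" with Reach "Low" allows closing, by the CLOSING_RULES table:
    print(CanClose(fuzzyRisk="High", fuzzyReach="Low"))  # True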

def RiskLong(curPrice: float, pHighest: float, pLowest: float) -> dict[str, float]:

Returns the Risk as a fuzzy level together with the Risk percentage in the range [0, 100], for buying from the current price.

This is the author's method, proposed by Timur Gilmullin and Mansur Gilmullin, based on fuzzy scales for measuring the levels of Fuzzy Risk. The following simple diagram explains what we mean by the Fuzzy Risk level:

(Figure: Fuzzy Risk levels.)

  • If opening a long (buy) position from the current price: RiskLong = Fuzzy(|P - L| / (H - L)). Here:
    • P is the current price,
    • L (H) is the lowest (highest) price in the forecasted chain of candles, or the prognosis of the lower (upper) border of the price movement range,
    • Fuzzy() is the fuzzification function that converts real values to their fuzzy representation.

See also: RiskShort().

Parameters
  • curPrice: Current actual price (usually the latest close price).
  • pHighest: The highest close price in the forecasted chain of candles, or the prognosis of the highest border of the price movement range.
  • pLowest: The lowest close price in the forecasted chain of candles, or the prognosis of the lowest border of the price movement range.
Returns

Dictionary with the Fuzzy Risk level and Risk percentage, e.g. {"riskFuzzy": "High", "riskPercent": 66.67}.

def RiskShort(curPrice: float, pHighest: float, pLowest: float) -> dict[str, float]:

Returns the Risk as a fuzzy level together with the Risk percentage in the range [0, 100], for selling from the current price. This method is the opposite of RiskLong() (see the explanation there of what we mean by Fuzzy Risk).

  • If opening a short (sell) position from the current price: RiskShort = Fuzzy(|P - H| / (H - L)). Here:
    • P is the current price,
    • L (H) is the lowest (highest) price in the forecasted chain of candles, or the prognosis of the lower (upper) border of the price movement range,
    • Fuzzy() is the fuzzification function that converts real values to their fuzzy representation.
Parameters
  • curPrice: Current actual price (usually the latest close price).
  • pHighest: The highest close price in the forecasted chain of candles, or the prognosis of the highest border of the price movement range.
  • pLowest: The lowest close price in the forecasted chain of candles, or the prognosis of the lowest border of the price movement range.
Returns

Dictionary with the Fuzzy Risk level and Risk percentage, e.g. {"riskFuzzy": "Low", "riskPercent": 20.12}.
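
A worked numeric check of both formulas (a sketch; the fuzzy labels assume the default FUZZY_SCALE):

    from tksbrokerapi.TradeRoutines import RiskLong, RiskShort

    # H = 115, L = 100, current price P = 110:
    print(RiskLong(curPrice=110, pHighest=115, pLowest=100))
    # |110 - 100| / (115 - 100) = 2/3 -> {"riskFuzzy": "High", "riskPercent": 66.67}

    print(RiskShort(curPrice=110, pHighest=115, pLowest=100))
    # |110 - 115| / (115 - 100) = 1/3 -> riskPercent 33.33, a lower fuzzy level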

def ReachLong(pClosing: pandas.core.series.Series) -> dict[str, float]:

The Fuzzy Reach is a measure of how reachable a forecasted price (the highest or lowest close) is. This function calculates the reachability of the highest close price.

This is the author's method, proposed by Timur Gilmullin and Mansur Gilmullin, based on fuzzy scales for measuring the levels of Fuzzy Reach. The following simple diagram explains what is meant by the Fuzzy Reach level:

(Figure: Fuzzy Reach levels.)

Fuzzy levels and percentages in the range [0, 100] are defined for the maximum and minimum forecasted close prices. The prognosis horizon is divided into 5 parts of the time range: I, II, III, IV and V, from the first forecasted candle (or price) to the last one.

Each part corresponds to a fuzzy level, depending on its distance in time from the current actual candle (or price):

  • I = Max,
  • II = High,
  • III = Med,
  • IV = Low,
  • V = Min.

The function finds the first fuzzy level appropriate to the part of the time range in which the (highest or lowest) close price is located. You can use other price chains (open, high, low) instead of candle close prices, but this is usually not recommended.

Recommendation: if you have no prognosis chain of candles, just use the "Med" Fuzzy Reach level.

See also: ReachShort().

Parameters
  • pClosing: Pandas Series with the prognosis chain of candle closing prices (the "close" prices of an OHLCV-formatted candles chain). The forecasted prices are indexed from zero, starting with the first candle of the forecast; the last price is the "farthest" from the current actual close price.
Returns

Dictionary with the Fuzzy Reach level and Reach percentage for the highest close price, e.g. {"reachFuzzy": "Low", "reachPercent": 20.12}.

def ReachShort(pClosing: pandas.core.series.Series) -> dict[str, float]:

The Fuzzy Reach is a measure of how reachable a forecasted price (the highest or lowest close) is. This method is similar to ReachLong() (see the explanation there of what we mean by Fuzzy Reach), but here we calculate the reachability of the lowest close price.

Parameters
  • pClosing: Pandas Series with the prognosis chain of candle closing prices (the "close" prices of an OHLCV-formatted candles chain). The forecasted prices are indexed from zero, starting with the first candle of the forecast; the last price is the "farthest" from the current actual close price. Recommendation: if you have no prognosis chain of candles, just use the "Med" Fuzzy Reach level.
Returns

Dictionary with the Fuzzy Reach level and Reach percentage for the lowest close price, e.g. {"reachFuzzy": "High", "reachPercent": 66.67}.
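
A usage sketch: in the forecast chain below, the highest close is on the first forecasted candle (part I of the horizon) and the lowest close is on the last one (part V), so ReachLong() should report the highest reachability level and ReachShort() the lowest:

    import pandas as pd
    from tksbrokerapi.TradeRoutines import ReachLong, ReachShort

    # Forecasted close prices, indexed from the first forecasted candle:
    prognosis = pd.Series([110., 108., 106., 104., 102.])

    print(ReachLong(prognosis))   # highest close in part I, e.g. {"reachFuzzy": "Max", ...}
    print(ReachShort(prognosis))  # lowest close in part V, e.g. {"reachFuzzy": "Min", ...}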

def FormatTimedelta(timeDelta: datetime.timedelta, precision: int = 0) -> str:

Format a timedelta object into a readable string like "0:00:12" or "0:00:12.34".

This function supports fixed-point precision formatting for seconds. Fractional seconds are rounded to the specified number of digits. No brackets are added — raw string like "H:MM:SS.ss" is returned.

If the precision is invalid (not an int or out of range [0..6]), the function will return the default string representation of the timedelta.

Examples:

  • FormatTimedelta(timedelta(seconds=12.987), precision=0) -> "0:00:12"
  • FormatTimedelta(timedelta(seconds=12.987), precision=1) -> "0:00:12.9"
  • FormatTimedelta(timedelta(seconds=12.987), precision=2) -> "0:00:12.99"
  • FormatTimedelta(timedelta(seconds=12.987), precision=5) -> "0:00:12.98700"
  • FormatTimedelta(timedelta(seconds=12.987), precision="bad") -> "0:00:12.987000"

Parameters
  • timeDelta: Timedelta object to format.
  • precision: Integer from 0 to 6 — how many digits to keep after seconds.
Returns

Formatted time string like "0:00:12" or "0:00:12.34". If precision is invalid, returns str(timeDelta).

def GetDatesAsString( start: str = None, end: str = None, userFormat: str = '%Y-%m-%d', outputFormat: str = '%Y-%m-%dT%H:%M:%SZ') -> tuple[str, str]:

Creates a tuple of date and time strings with timezone, parsed from user-friendly dates.

Warning! All dates must be in UTC time zone!

User dates format must be like: "%Y-%m-%d", e.g. "2020-02-03" (3 Feb, 2020).

Output date is UTC ISO time format by default: "%Y-%m-%dT%H:%M:%SZ".

Example input: start="2022-06-01", end="2022-06-20" -> output: ("2022-06-01T00:00:00Z", "2022-06-20T23:59:59Z"). An exception occurs if an input date has an incorrect format.

If start=None, end=None then return dates from yesterday to the end of the day.

If start=some_date_1, end=None then return dates from some_date_1 to the end of the day.

If start=some_date_1, end=some_date_2 then return dates from start of some_date_1 to end of some_date_2.

The start day may also be a negative integer (-1, -2, -3): how many days ago.

Also, you can use keywords for start if end=None:

  • today (from 00:00:00 to the end of the current day),
  • yesterday (-1 day, from 00:00:00 to 23:59:59),
  • week (-7 days, from 00:00:00 to the end of the current day),
  • month (-30 days, from 00:00:00 to the end of the current day),
  • year (-365 days, from 00:00:00 to the end of the current day).
Parameters
  • start: start day in format defined by userFormat or keyword.
  • end: end day in format defined by userFormat.
  • userFormat: user-friendly date format, e.g. "%Y-%m-%d".
  • outputFormat: output string date format.
Returns

Tuple of 2 strings ("start", "end"), e.g. ("2022-06-01T00:00:00Z", "2022-06-20T23:59:59Z"); the second string is the end of the last day. The tuple ("", "") is returned if errors occurred.
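
A usage sketch based on the cases above (keyword results depend on the current date):

    from tksbrokerapi.TradeRoutines import GetDatesAsString

    # Explicit range in the default user format "%Y-%m-%d":
    print(GetDatesAsString(start="2022-06-01", end="2022-06-20"))
    # ("2022-06-01T00:00:00Z", "2022-06-20T23:59:59Z")

    print(GetDatesAsString(start="yesterday"))  # -1 day, from 00:00:00 to 23:59:59
    print(GetDatesAsString(start="-2"))         # from two days ago to the end of the current day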

def NanoToFloat(units: str, nano: int) -> float:

Converts a number from nano-view (string parameter units plus integer parameter nano) to a float.

Examples:

  • NanoToFloat(units="2", nano=500000000) -> 2.5
  • NanoToFloat(units="0", nano=50000000) -> 0.05
Parameters
  • units: integer (or integer string) that represents the integer part of the number.
  • nano: integer (or integer string) that represents the fractional part of the number in nano units.
Returns

Float view of the number. If an error occurred, returns 0.

def FloatToNano(number: float) -> dict[str, int]:

Converts a float number to nano-view: a dictionary with a string units and an integer nano parameter, {"units": "string", "nano": integer}.

Examples:

  • FloatToNano(number=2.5) -> {"units": "2", "nano": 500000000}
  • FloatToNano(number=0.05) -> {"units": "0", "nano": 50000000}
Parameters
  • number: float number.
Returns

Nano-view of the number: {"units": "string", "nano": integer}. If an error occurred, returns {"units": "0", "nano": 0}.
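
NanoToFloat() and FloatToNano() are mutually inverse; a quick round-trip check based on the examples above:

    from tksbrokerapi.TradeRoutines import FloatToNano, NanoToFloat

    price = NanoToFloat(units="2", nano=500000000)  # 2.5
    assert FloatToNano(number=price) == {"units": "2", "nano": 500000000}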

def UpdateClassFields(instance: object, params: dict) -> None:

This method takes a config as a dictionary (preloaded from a YAML file) and applies its key: value pairs as the names and values of class fields. Example for the class TradeScenario: config["tickers"] = ["TICKER1", "TICKER2"] ==> TradeScenario(TinkoffBrokerServer).tickers = ["TICKER1", "TICKER2"].

Parameters
  • instance: instance of class to parametrize.
  • params: dict with all parameters in key: value format. Nothing is done to the object if an error occurred.
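
A minimal sketch of the idea (the TradeScenario class body here is hypothetical; any instance with matching field names works):

    from tksbrokerapi.TradeRoutines import UpdateClassFields

    class TradeScenario:  # hypothetical scenario class with default field values
        tickers = []
        reserve = 0.05

    config = {"tickers": ["TICKER1", "TICKER2"], "reserve": 0.1}  # e.g. preloaded from YAML

    scenario = TradeScenario()
    UpdateClassFields(scenario, config)
    print(scenario.tickers)  # ["TICKER1", "TICKER2"]
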
def SeparateByEqualParts( elements: list[typing.Any], parts: int = 2, union: bool = True) -> list[list[typing.Any]]:

Takes an input list and tries to separate it into equal parts.

Examples:

  • SeparateByEqualParts([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], parts=2) -> [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]]
  • SeparateByEqualParts([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], parts=2, union=True) -> [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9, 10]]
  • SeparateByEqualParts([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], parts=2, union=False) -> [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9], [10]]
  • SeparateByEqualParts([1, 2, 3], parts=2, union=True) -> [[1], [2, 3]]
  • SeparateByEqualParts([1, 2, 3], parts=2, union=False) -> [[1], [2], [3]]

If parts > length of elements:

  • SeparateByEqualParts([1], parts=2, union=True) -> [[1]]
  • SeparateByEqualParts([1, 2, 3], parts=4, union=True) -> [[1], [2], [3]]
  • SeparateByEqualParts([1], parts=2, union=False) -> [[1], []]
  • SeparateByEqualParts([1, 2, 3], parts=4, union=False) -> [[1], [2], [3], []]
Parameters
  • elements: list of objects.
  • parts: int, numbers of equal parts of objects.
  • union: bool; if True and the remainder after separation is not empty, the remainder is merged into the last part.
Returns

List of lists with equal parts of objects. If an error occurred, returns an empty list [].

def CalculateLotsForDeal(currentPrice: float, maxCost: float, volumeInLot: int = 1) -> int:

Calculates the maximum number of lots for a deal, depending on the current price and the volume of the instrument in one lot.

Formula: lots = maxCost // (currentPrice * volumeInLot), i.e. the maximum count of lots for which cost = lots * currentPrice * volumeInLot <= maxCost.

If costOneLot = currentPrice * volumeInLot > maxCost, then lots = 1 is returned.

If an error occurred, lots = 0 is returned.

Parameters
  • currentPrice: the current price of instrument, >= 0.
  • maxCost: the maximum cost of all lots of instrument in portfolio, >= 0.
  • volumeInLot: volumes of instrument in one lot, >= 1.
Returns

integer number of lots, >= 0.
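
A worked example of the formula:

    from tksbrokerapi.TradeRoutines import CalculateLotsForDeal

    # maxCost // (currentPrice * volumeInLot) = 1000 // (250.5 * 1) = 3 lots,
    # so the deal costs 3 * 250.5 = 751.5 <= 1000:
    print(CalculateLotsForDeal(currentPrice=250.5, maxCost=1000, volumeInLot=1))  # 3

    # One lot costs more than maxCost, so the function returns 1:
    print(CalculateLotsForDeal(currentPrice=1500, maxCost=1000))  # 1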

def HampelFilter( series: Union[list, pandas.core.series.Series], window: int = 5, sigma: float = 3, scaleFactor: float = 1.4826) -> pandas.core.series.Series:

Outlier detection with the Hampel filter. It detects outliers using a sliding window, comparing the input values of the series with the window medians. The Hampel filter is often considered extremely effective in practice.

For each window, we calculate the median and the Median Absolute Deviation (MAD). If the considered observation differs from the window median by more than sigma standard deviations multiplied by scaleFactor, we treat it as an outlier.

Let Xi be the elements of the input series in the i-th window, s be sigma (the number of standard deviations), and k be the scale factor, which depends on the distribution (≈1.4826 for the normal distribution).

How to calculate the rolling MAD: MAD(Xi) = Median(|x1 − Median(Xi)|, ..., |xn − Median(Xi)|)

What counts as an anomaly: A = {a | |a − Median(Xi)| > s ∙ k ∙ MAD(Xi)}

References:

  1. Gilmullin T.M., Gilmullin M.F. How to quickly find anomalies in number series using the Hampel method.
  2. Lewinson Eryk. Outlier Detection with Hampel Filter. September 26, 2019.
  3. Hancong Liu, Sirish Shah and Wei Jiang. On-line outlier detection and data cleaning. Computers and Chemical Engineering. Vol. 28, March 2004, pp. 1635–1647.
  4. Hampel F. R. The influence curve and its role in robust estimation. Journal of the American Statistical Association, 69, 382–393, 1974.

Examples:

  • HampelFilter([1, 1, 1, 1, 1, 1], window=3) -> pd.Series([False, False, False, False, False, False])
  • HampelFilter([1, 1, 1, 2, 1, 1], window=3) -> pd.Series([False, False, False, True, False, False])
  • HampelFilter([0, 1, 1, 1, 1, 0], window=3) -> pd.Series([True, False, False, False, False, True])
  • HampelFilter([1], window=3) -> pd.Series([False])
  • HampelFilter([5, 5, 50, 5, 5], window=2) -> pd.Series([False, False, True, False, False])
  • HampelFilter([100, 1, 1, 1, 1, 100], window=2) -> pd.Series([True, False, False, False, False, True])
  • HampelFilter([1, 1, 10, 1, 10, 1, 1], window=2) -> pd.Series([False, False, True, False, True, False, False])
Parameters
  • series: Pandas Series object with numbers in which we identify outliers.
  • window: length of the sliding window (5 points by default), 1 <= window <= len(series).
  • sigma: sigma is the number of standard deviations which identify the outlier (3 sigma by default), > 0.
  • scaleFactor: constant scale factor (1.4826 by default for Gaussian distribution), > 0.
Returns

Pandas Series object with True/False values. True means that an outlier was detected at that position of the input series. If an error occurred, an empty Series is returned.
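
For reference, a readable (non-optimized) sketch of the rule above. Note: to reproduce the listed examples, window is interpreted here as the half-width of a centered window; the library's vectorized implementation may organize the windows differently:

    import pandas as pd

    def HampelSketch(series, window=5, sigma=3, scaleFactor=1.4826):
        """True where |x - Median(Xi)| > sigma * scaleFactor * MAD(Xi)."""
        s = pd.Series(series, dtype=float)
        n = len(s)
        result = pd.Series(False, index=s.index)
        for i in range(n):
            win = s.iloc[max(0, i - window): min(n, i + window + 1)]  # centered window
            med = win.median()
            mad = (win - med).abs().median()  # rolling MAD
            result.iloc[i] = abs(s.iloc[i] - med) > sigma * scaleFactor * mad
        return result

    # HampelSketch([5, 5, 50, 5, 5], window=2) -> [False, False, True, False, False]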

def HampelAnomalyDetection( series: Union[list, pandas.core.series.Series], compareWithMax: bool = True, **kwargs) -> Optional[int]:

Anomaly detection using the Hampel filter. This function returns the minimum index among the anomaly elements, or the index of the first maximum element of the input series if that index is less than the first anomaly index. If the series has no anomalies, None is returned.

The anomaly filter is a function F: X → {True, False}, with F(xi) = True if xi ∈ A and False if xi ∉ A, where X is the input series with elements xi and A is the anomaly set.

References:

  1. Gilmullin T.M., Gilmullin M.F. How to quickly find anomalies in number series using the Hampel method. December 27, 2022.
  2. Jupyter Notebook with examples.
  3. A simple Python script demonstrating how to use the Hampel Filter to find anomalies in time series.

Examples:

  • HampelAnomalyDetection([1, 1, 1, 1, 1, 1]) -> None
  • HampelAnomalyDetection([1, 1, 1, 1, 111, 1]) -> 4
  • HampelAnomalyDetection([1, 1, 10, 1, 1, 1]) -> 2
  • HampelAnomalyDetection([111, 1, 1, 1, 1, 1]) -> 0
  • HampelAnomalyDetection([111, 1, 1, 1, 1, 111]) -> 0
  • HampelAnomalyDetection([1, 11, 1, 111, 1, 1]) -> 1
  • HampelAnomalyDetection([1, 1, 1, 111, 99, 11]) -> 3
  • HampelAnomalyDetection([1, 1, 11, 111, 1, 1, 1, 11111]) -> 2
  • HampelAnomalyDetection([1, 1, 1, 111, 111, 1, 1, 1, 1]) -> 3
  • HampelAnomalyDetection([1, 1, 1, 1, 111, 1, 1, 11111, 5555]) -> 4
  • HampelAnomalyDetection([9, 13, 12, 12, 13, 12, 12, 13, 12, 12, 13, 12, 12, 13, 12, 13, 12, 12, 1, 1]) -> 0
  • HampelAnomalyDetection([9, 13, 12, 12, 13, 12, 1000, 13, 12, 12, 300000, 12, 12, 13, 12, 2000, 1, 1, 1, 1]) -> 0

Some **kwargs parameters you can pass to HampelFilter():

  • window is the length of the sliding window (5 points by default), 1 <= window <= len(series).
  • sigma is the number of standard deviations which identify the outlier (3 sigma by default), > 0.
  • scaleFactor is the constant scale factor (1.4826 by default), > 0.
Parameters
  • series: List or Pandas Series of numeric values to check for anomalies.
  • compareWithMax: If True (default), returns min(index of anomaly, index of first maximum). If False, returns only the first anomaly index detected by HampelFilter.
  • kwargs: Additional parameters are forwarded to HampelFilter().
Returns

Index of the first anomaly (or intersection with maximum, if enabled). Returns None if no anomaly is found.

def CalculateAdaptiveCacheReserve( drawdowns: list[float], curDrawdown: float, reserve: float, portfolioValue: float, amplificationFactor: float = 1.25, amplificationSensitivity: float = 0.1) -> float:

Calculates the adaptive target cash reserve based on current and historical portfolio drawdowns.

This function dynamically adjusts the reserve allocated for averaging positions (e.g., during drawdowns). If the drawdown increases for several consecutive iterations, the reserve is amplified exponentially. If the drawdown stabilizes or decreases, the reserve is reset to the base level.

The amplification is computed as: amplification = amplificationFactor × exp(growStreak × amplificationSensitivity)

Where:

  • growStreak is the number of consecutive days the drawdown has been increasing, including the current day.
  • amplificationFactor is the base multiplier.
  • amplificationSensitivity controls how aggressively the amplification grows with each additional drawdown increase.

Example:

With amplificationFactor = 1.25 and amplificationSensitivity = 0.1, if the drawdown increases for 3 days: amplification = 1.25 × exp(0.3) ≈ 1.25 × 1.3499 ≈ 1.687

Parameters
  • drawdowns: historical portfolio drawdowns (fractions between 0 and 1); drawdowns[0] is the oldest, drawdowns[-1] is the most recent.
  • curDrawdown: current portfolio drawdown at the time of calculation.
  • reserve: base reserve ratio (e.g., 0.05 means 5% of the portfolio value).
  • portfolioValue: current portfolio value (in currency units).
  • amplificationFactor: base multiplier for reserve amplification (default is 1.25).
  • amplificationSensitivity: exponential growth rate of amplification per growStreak step (default is 0.1).
Returns

calculated target cash reserve in portfolio currency units.
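
A sketch of the amplification step under the stated formula (the growStreak counting and the reset-to-base rule are assumptions inferred from the description above):

    import math

    def AmplificationSketch(drawdowns, curDrawdown, amplificationFactor=1.25, amplificationSensitivity=0.1):
        chain = list(drawdowns) + [curDrawdown]
        growStreak = 0  # consecutive drawdown increases, including the current day
        for i in range(len(chain) - 1, 0, -1):
            if chain[i] > chain[i - 1]:
                growStreak += 1
            else:
                break
        if growStreak == 0:
            return 1.0  # drawdown stabilized or decreased: reserve resets to the base level
        return amplificationFactor * math.exp(growStreak * amplificationSensitivity)

    # AmplificationSketch([0.01, 0.02, 0.03], 0.04) -> 1.25 * exp(0.3) ≈ 1.687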

def HampelCleaner( series: pandas.core.series.Series, window: int = 5, sigma: float = 3, scaleFactor: float = 1.4826, strategy: str = 'neighborAvg', fallbackValue: float = 0.0, medianWindow: int = 3) -> pandas.core.series.Series:

Replaces outliers in a time series using the Hampel filter and a selected replacement strategy.

This function detects anomalies using the Hampel method and replaces them according to the specified strategy. It is designed for use in financial time series, sensor data, or any numerical sequences requiring robust cleaning before further analysis (e.g., volatility estimation, trend modeling, probability forecasting).

Available replacement strategies:

  • "neighborAvg": average of adjacent neighbors (default). Best for stable, low-noise time series where local continuity matters.

  • "prev": previous non-outlier value. Suitable for cumulative or trend-sensitive series, avoids abrupt distortions.

  • "const": fixed fallback value. Recommended when anomalies reflect technical failures (e.g., spikes due to API glitches).

  • "medianWindow": local window median (uses medianWindow size). Robust to single-point noise and short bursts of volatility; good for candle data.

  • "rollingMean": centered rolling mean over the window (same as a Hampel window). Applies smooth correction while preserving a general shape; works well for low-volatility assets.

Parameters
  • series: input time series as a Pandas Series of floats.
  • window: sliding window size used in Hampel filtering (5 by default).
  • sigma: threshold multiplier for anomaly detection (3 by default).
  • scaleFactor: scaling factor for the MAD (1.4826 by default, optimal for Gaussian data).
  • strategy: strategy used to replace detected outliers (see the list above).
  • fallbackValue: constant used as a fallback in the "const" strategy or when neighbors are missing.
  • medianWindow: window size used for the "medianWindow" strategy.
Returns

cleaned time series as a Pandas Series with outliers replaced.

def LogReturns(series: pandas.core.series.Series) -> pandas.core.series.Series:

Calculates logarithmic returns for a time series of prices.

Parameters
  • series: A series of close prices.
Returns

A series of log returns.
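
The log return is r_t = ln(P_t / P_(t-1)); an equivalent pandas sketch (this version drops the leading NaN, which may or may not match the library's behavior):

    import numpy as np
    import pandas as pd

    def LogReturnsSketch(series: pd.Series) -> pd.Series:
        return np.log(series / series.shift(1)).dropna()  # r_t = ln(P_t / P_(t-1))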

def MeanReturn(logReturns: pandas.core.series.Series) -> float:

Computes the mean return from a log-return series.

Parameters
  • logReturns: A series of log returns.
Returns

The average return.

def Volatility(logReturns: pandas.core.series.Series, ddof: int = 1) -> float:

Computes the sample standard deviation of log returns using the specified Bessel correction.

Parameters
  • logReturns: A series of log returns.
  • ddof: Degrees of freedom for Bessel's correction (1 by default, use 2 per methodology).
Returns

Volatility (standard deviation).

def ZScore( logTargetRatio: float, meanReturn: float, volatility: float, horizon: int) -> float:

Computes the standardized deviation (z-score) using geometric Brownian motion with drift and volatility.

Parameters
  • logTargetRatio: Logarithm of (targetPrice / currentPrice).
  • meanReturn: Estimated mean of log returns (μ).
  • volatility: Estimated volatility of log returns (σ).
  • horizon: Forecast horizon (number of candles).
Returns

z-score value (float).
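
Under the GBM model, the deviation is usually standardized as z = (ln(target/current) − μ·h) / (σ·√h); a sketch assuming that formulation (the library's exact drift term may differ, e.g. it may use the effective drift μ − σ²/2):

    import math

    def ZScoreSketch(logTargetRatio, meanReturn, volatility, horizon):
        # z = (logTargetRatio - mu * horizon) / (sigma * sqrt(horizon))
        return (logTargetRatio - meanReturn * horizon) / (volatility * math.sqrt(horizon))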

def BayesianAggregation(p1: float, p2: float) -> float:

Combines two conditional probabilities using Bayesian aggregation.

Parameters
  • p1: First probability.
  • p2: Second probability.
Returns

Aggregated probability using Bayesian fusion.
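
The usual fusion rule for two independent probability estimates is p = p1·p2 / (p1·p2 + (1 − p1)·(1 − p2)); a sketch assuming that rule:

    def BayesianAggregationSketch(p1: float, p2: float) -> float:
        return (p1 * p2) / (p1 * p2 + (1 - p1) * (1 - p2))

    # BayesianAggregationSketch(0.7, 0.6) ≈ 0.778: two weak positives reinforce each other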

def VolatilityWeight(sigmaLow: float, sigmaHigh: float) -> float:

Computes a dynamic weight coefficient based on relative volatility of two timeframes.

Parameters
  • sigmaLow: Volatility from the lower timeframe (faster/shorter interval).
  • sigmaHigh: Volatility from the higher timeframe (slower/longer interval).
Returns

Weight alpha in the range [0.0, 1.0], prioritizing a higher timeframe when its volatility is higher.

@jit(nopython=True)
def RollingMean(array: numpy.ndarray, window: int) -> numpy.ndarray:

Calculates a simple moving average (SMA) using a sliding window over a NumPy array with running sum optimization.

Parameters
  • array: A NumPy array of input data (e.g., closing prices).
  • window: The size of the rolling window for calculating the average. Must be a positive integer.
Returns

A NumPy array containing the rolling mean values, with NaNs for positions before the first full window.
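
A plain-NumPy sketch of the running-sum trick (the library version is numba-compiled; this only illustrates the O(n) idea and assumes 1 <= window <= len(array)):

    import numpy as np

    def RollingMeanSketch(array: np.ndarray, window: int) -> np.ndarray:
        out = np.full(len(array), np.nan)            # NaNs before the first full window
        csum = np.cumsum(array)
        out[window - 1] = csum[window - 1] / window  # first full window
        out[window:] = (csum[window:] - csum[:-window]) / window  # running-sum update
        return out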

@jit(nopython=True)
def RollingStd(array: numpy.ndarray, window: int, ddof: int = 1) -> numpy.ndarray:

Calculates a rolling standard deviation over a NumPy array using a sliding window.

Parameters
  • array: A NumPy array of input data (e.g., closing prices).
  • window: The size of the rolling window for calculating standard deviation.
  • ddof: Delta degrees of freedom. Default is 1.
Returns

A NumPy array containing the rolling standard deviation values.

def FastBBands( close: Union[pandas.core.series.Series, numpy.ndarray], length: int = 5, std: float = 2.0, ddof: int = 0, offset: int = 0, **kwargs) -> Optional[pandas.core.frame.DataFrame]:

Calculates Bollinger Bands (BBANDS) using a fast NumPy-based implementation.

Parameters
  • close: Series or array of closing prices.
  • length: Rolling window size for the moving average and standard deviation. The default is 5.
  • std: Number of standard deviations to determine the width of the bands. The default is 2.0.
  • ddof: Delta degrees of freedom for standard deviation calculation. Default is 0.
  • offset: How many periods to offset the resulting bands. The default is 0.
  • kwargs: Optional keyword arguments are forwarded for filling missing values.

    Supported options (with default values):

    • fillna (None): Value to fill missing data points (NaN values).
    • fill_method (None): Method to fill missing data points (e.g., ffill, bfill).
Returns

A pandas DataFrame containing the following columns:

  • lower: Lower Bollinger Band.
  • mid: Middle band (simple moving average).
  • upper: Upper Bollinger Band.
  • bandwidth: Percentage bandwidth between the upper and lower bands.
  • percent: Position of the close price within the bands (from 0 to 1).

Returns None if the input is invalid.
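
A usage sketch with the signature above (input values are illustrative):

    import pandas as pd
    from tksbrokerapi.TradeRoutines import FastBBands

    close = pd.Series([10., 11., 12., 11., 13., 14., 13., 15., 16., 15.])
    bb = FastBBands(close, length=5, std=2.0)
    print(bb[["lower", "mid", "upper"]].tail(3))  # bands for the three latest candles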

def FastPSAR( high: Union[pandas.core.series.Series, numpy.ndarray], low: Union[pandas.core.series.Series, numpy.ndarray], af0: float = 0.02, af: Optional[float] = None, maxAf: float = 0.2, offset: int = 0, **kwargs) -> Optional[pandas.core.frame.DataFrame]:

Calculates the Parabolic SAR (PSAR) indicator using a fast NumPy-based implementation.

Parameters
  • high: Series or array of high prices.
  • low: Series or array of low prices.
  • af0: Initial Acceleration Factor. The default is 0.02.
  • af: Acceleration Factor (not used separately, defaults to af0). Default is None.
  • maxAf: Maximum Acceleration Factor. The default is 0.2.
  • offset: How many periods to offset the resulting arrays. The default is 0.
  • kwargs: Optional keyword arguments are forwarded for filling missing values.

    Supported options (with default values):

    • fillna (None): Value to fill missing data points (NaN values).
    • fill_method (None): Method to fill missing values (e.g., ffill, bfill).
Returns

A pandas DataFrame containing the following columns:

  • long: SAR points for long trends (upward movement).
  • short: SAR points for short trends (downward movement).
  • af: Acceleration Factor values over time.
  • reversal: 1 if a reversal was detected on this candle, otherwise 0.

Returns None if the input is invalid.

@jit(nopython=True)
def FastHurst(series: numpy.ndarray) -> float:

Fast estimation of Hurst exponent using the rescaled range (R/S) method.

Parameters
  • series: NumPy array of prices (1D).
Returns

Hurst exponent ∈ [0.0, 1.0].

@jit(nopython=True)
def FastSampEn( series: numpy.ndarray, embeddingDim: int = 2, tolerance: float = 0.2) -> float:

Fast Sample Entropy for chaos estimation.

Parameters
  • series: NumPy array of floats.
  • embeddingDim: Embedding dimension m.
  • tolerance: Tolerance r.
Returns

Sample entropy.

def FastDfa(series: numpy.ndarray, scale: int = 12) -> float:

Fast Detrended Fluctuation Analysis (DFA) estimator.

Parameters
  • series: NumPy array of floats.
  • scale: Box size for detrending.
Returns

Scaling exponent alpha.

def ChaosMeasure(series: numpy.ndarray, model: str = 'hurst') -> float:

Dispatches to a selected chaos estimation model.

Parameters
  • series: NumPy array of floats.
  • model: One of the following values:
    • hurst: fast estimation of the Hurst exponent using the rescaled range (R/S) method, see FastHurst();
    • sampen: fast Sample Entropy for chaos estimation, see FastSampEn();
    • dfa: fast Detrended Fluctuation Analysis (DFA) estimator, see FastDfa().
Returns

Chaos value.

def ChaosConfidence(value: float, model: str) -> float:

Converts a chaos metric value into a trust coefficient in [0.0, 1.0].

For supported models:

  • hurst: symmetric parabola with a peak at 0.5;
  • dfa: linear fade from 0.5;
  • sampen: inverse, low entropy means high trust.

Parameters
  • value: Chaos measures value.
  • model: One of the following values:
    • hurst: fast estimation of the Hurst exponent using the rescaled range (R/S) method, see FastHurst();
    • sampen: fast Sample Entropy for chaos estimation, see FastSampEn();
    • dfa: fast Detrended Fluctuation Analysis (DFA) estimator, see FastDfa().
Returns

Confidence coefficient ∈ [0.0, 1.0].

def PhaseLocation(price: float, lower: float, upper: float) -> float:

Normalized phase location of price in [0.0, 1.0] inside the Bollinger channel.

Parameters
  • price: Current price of the asset.
  • lower: Lower Bollinger band.
  • upper: Upper Bollinger band.
Returns

Phase position ∈ [0.0, 1.0], where 0.0 is bottom, 1.0 is top.
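
The phase is the price's normalized position inside the channel; a sketch assuming values are clipped into [0.0, 1.0]:

    def PhaseLocationSketch(price: float, lower: float, upper: float) -> float:
        return min(max((price - lower) / (upper - lower), 0.0), 1.0)

    # PhaseLocationSketch(103.0, 100.0, 104.0) -> 0.75, near the channel top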

def PhaseConfidence(phase: float, direction: str) -> float:

Trust modifier based on the current phase and direction of signal.

Parameters
  • phase: Normalized price position ∈ [0.0, 1.0] in the channel.
  • direction: Signal direction — must be Buy or Sell.
Returns

Confidence modifier ∈ [0.0, 1.0].

def AdjustProbabilityByChaosAndPhaseTrust( pModel: float, chaos: float, phase: float, wModel: float = 1.0, wChaos: float = 0.5, wPhase: float = 0.3) -> float:

Final adjusted probability after applying the chaos and phase trust modifiers.

Parameters
  • pModel: Base probability from the main model.
  • chaos: Chaos trust coefficient ∈ [0.0, 1.0].
  • phase: Phase trust coefficient ∈ [0.0, 1.0].
  • wModel: Weight for the original model probability.
  • wChaos: Weight for chaos confidence.
  • wPhase: Weight for phase confidence.
Returns

Adjusted probability ∈ [0.0, 1.0].

def EstimateTargetReachability( seriesLowTF: Union[list, pandas.core.series.Series], seriesHighTF: Union[list, pandas.core.series.Series], currentPrice: float, targetPrice: float, horizonLowTF: int, horizonHighTF: int, ddof: int = 2, cleanWithHampel: bool = False, chaosTrust: float = 1.0, phaseTrust: float = 1.0, **kwargs) -> tuple[float, str]:

Estimates the probability of reaching a target price using two price series from different timeframes. Implements the full methodology: log returns, volatility with Bessel correction, effective drift, z-score, cumulative probability, Bayesian aggregation, volatility-based weighting, and fuzzy classification.

References:

  1. Will the Price Hit the Target: Assessing Probability Instead of Guessing (RU article): https://teletype.in/@tgilmullin/target-probability

  2. Statistical Estimation of the Probability of Reaching a Target Price Considering Volatility and Returns Across Different Timeframes (RU article on which the formulas are based).

Parameters
  • seriesLowTF: A close-price series from the lower timeframe.
  • seriesHighTF: A close-price series from the higher timeframe.
  • currentPrice: The current price of the asset.
  • targetPrice: The target price to be reached or exceeded.
  • horizonLowTF: The forecast horizon in candles for the lower timeframe.
  • horizonHighTF: The forecast horizon in candles for the higher timeframe.
  • ddof: Degrees of freedom for volatility estimation (use 2 as per article).
  • cleanWithHampel: If True, applies outlier cleaning to both input series before computing log returns using HampelCleaner() (False by default). Recommended for real market data where spikes, anomalies, or gaps may distort volatility and probability estimates.
  • chaosTrust: Trust coefficient based on chaos metric ∈ [0.0, 1.0]. Default is 1.0 (no modification), see also ChaosConfidence().
  • phaseTrust: Trust coefficient based on the Bollinger-band phase ∈ [0.0, 1.0]. Default is 1.0 (no modification), see also PhaseConfidence().
  • kwargs: Optional keyword arguments are forwarded to HampelCleaner() if cleanWithHampel is True. Supported options (with default values):
    • window (5): Sliding window size for HampelCleaner().
    • sigma (3): Threshold multiplier for anomaly detection.
    • scaleFactor (1.4826): Scaling factor for MAD.
    • strategy (default neighborAvg): Outlier replacement strategy:
      • neighborAvg: average of adjacent neighbors. Good for smooth, low-noise series.
      • prev: previous valid value. Preserves the trend direction.
      • const: constant fallback. Use for API glitches or corrupted data.
      • medianWindow: local median window. Best default for real-world candles.
      • rollingMean: centered mean smoothing for low-volatility series.
    • fallbackValue (0.0): Constant value for use in const strategy or edge cases.
    • medianWindow (3): Window size for "medianWindow" strategy.
Returns

A tuple (pFinal, fFinal), where:

  • pFinal is a float in the range [0.0, 1.0]: the final adjusted probability of reaching the target, optionally modified by the chaos and phase confidence if enabled;
  • fFinal is a fuzzy label, one of ["Min", "Low", "Med", "High", "Max"], based on pFinal.
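
A usage sketch built on the signature above (series values are illustrative only):

    import pandas as pd
    from tksbrokerapi.TradeRoutines import EstimateTargetReachability

    lowTF = pd.Series([100., 101., 100.5, 102., 101.5, 103., 102.5, 104.])  # e.g. hourly closes
    highTF = pd.Series([95., 97., 99., 100., 102., 101., 103., 104.])       # e.g. daily closes

    pFinal, fFinal = EstimateTargetReachability(
        seriesLowTF=lowTF, seriesHighTF=highTF,
        currentPrice=104., targetPrice=110.,
        horizonLowTF=24, horizonHighTF=5,
        cleanWithHampel=True, strategy="medianWindow",  # kwargs go to HampelCleaner()
    )
    print(pFinal, fFinal)  # e.g. something like (0.42, "Med")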

def DetermineDecimalPrecision( values: Union[list, numpy.ndarray, pandas.core.series.Series], maxDigits: int = 15, sampleSize: int = 100) -> int:

Determines the actual number of decimal digits used in values (not the minimal sufficient number, but the visible one). Fast and suitable for float formatting before saving to CSV.

Parameters
  • values: List, array or Series of floats.
  • maxDigits: Max digits to detect (default 15).
  • sampleSize: How many values to sample if an array is large.
Returns

actual max number of digits after the decimal point.
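
A usage sketch (assuming the maximum visible precision across the sampled values is reported):

    from tksbrokerapi.TradeRoutines import DetermineDecimalPrecision

    print(DetermineDecimalPrecision([1.5, 2.25, 3.125]))  # 3: the widest visible fraction has 3 digits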