factory module

Factory for building indicators.

Run for the examples below:

>>> from vectorbtpro import *

>>> price = pd.DataFrame({
...     'a': [1, 2, 3, 4, 5],
...     'b': [5, 4, 3, 2, 1]
... }, index=pd.date_range("2020", periods=5)).astype(float)
>>> price
            a    b
2020-01-01  1.0  5.0
2020-01-02  2.0  4.0
2020-01-03  3.0  3.0
2020-01-04  4.0  2.0
2020-01-05  5.0  1.0

build_columns function

build_columns(
    params,
    input_columns,
    level_names=None,
    hide_levels=None,
    single_value=None,
    param_settings=None,
    per_column=False,
    ignore_ranges=False,
    **kwargs
)

For each parameter in params, create a new column level with parameter values and stack it on top of input_columns.
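As an illustration only (not the actual vectorbtpro implementation, which also handles hidden levels and ranges), stacking a parameter level on top of input columns can be sketched with plain pandas:

```python
import pandas as pd

# Conceptual sketch: repeat the input columns once per parameter value and
# stack a new top level holding the parameter values.
input_columns = pd.Index(["a", "b"], name="asset")
windows = [2, 3]  # one full set of input columns per parameter value

tuples = [(w, c) for w in windows for c in input_columns]
new_columns = pd.MultiIndex.from_tuples(tuples, names=["window", "asset"])
print(new_columns.tolist())  # [(2, 'a'), (2, 'b'), (3, 'a'), (3, 'b')]
```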


combine_indicator_with_other function

combine_indicator_with_other(
    other,
    np_func
)

Combine IndicatorBase with another compatible object.


combine_objs function

combine_objs(
    obj,
    other,
    combine_func,
    *args,
    level_name=None,
    keys=None,
    allow_multiple=True,
    **kwargs
)

Combines/compares obj with other, for example, to generate signals.

Both will be broadcast together. Pass other as a tuple or a list to compare against multiple arguments. In this case, a new column level named level_name will be created.

See BaseAccessor.combine().
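A rough sketch of the multiple-argument case with plain pandas (the level name "threshold" and the `>` comparison stand in for level_name and combine_func; see BaseAccessor.combine() for the real behavior):

```python
import pandas as pd

# Sketch: compare one object against multiple others; each comparison gets
# its own column set under a new level standing in for `level_name`.
obj = pd.Series([1.0, 2.0, 3.0])
others = [1.5, 2.5]  # multiple values to compare against

result = pd.concat(
    {other: obj > other for other in others},  # combine_func is `>` here
    axis=1,
    names=["threshold"],
)
print(result)
```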


indicator function

indicator(
    *args,
    **kwargs
)

Shortcut for IndicatorFactory.get_indicator().


pandas_ta function

pandas_ta(
    *args,
    **kwargs
)

Shortcut for IndicatorFactory.from_pandas_ta().


prepare_params function

prepare_params(
    params,
    param_names,
    param_settings,
    input_shape=None,
    to_2d=False,
    context=None
)

Prepare parameters.

Resolves references and performs broadcasting to the input shape.

Returns prepared parameters as well as whether the user provided a single parameter combination.


smc function

smc(
    *args,
    **kwargs
)

Shortcut for IndicatorFactory.from_smc().


ta function

ta(
    *args,
    **kwargs
)

Shortcut for IndicatorFactory.from_ta().


talib function

talib(
    *args,
    **kwargs
)

Shortcut for IndicatorFactory.from_talib().


techcon function

techcon(
    *args,
    **kwargs
)

Shortcut for IndicatorFactory.from_techcon().


technical function

technical(
    *args,
    **kwargs
)

Shortcut for IndicatorFactory.from_technical().


wqa101 function

wqa101(
    *args,
    **kwargs
)

Shortcut for IndicatorFactory.from_wqa101().


IndicatorBase class

IndicatorBase(
    wrapper,
    input_list,
    input_mapper,
    in_output_list,
    output_list,
    param_list,
    mapper_list,
    short_name,
    **kwargs
)

Indicator base class.

Properties should be set before instantiation.

Superclasses

Inherited members

Subclasses


column_stack class method

IndicatorBase.column_stack(
    *objs,
    wrapper_kwargs=None,
    reindex_kwargs=None,
    **kwargs
)

Stack multiple IndicatorBase instances along columns x parameters.

Uses ArrayWrapper.column_stack() to stack the wrappers.

All objects to be merged must have the same index.


dropna method

IndicatorBase.dropna(
    include_all=True,
    **kwargs
)

Drop missing values.

Keyword arguments are passed to pd.Series.dropna or pd.DataFrame.dropna.


in_output_names method

Names of the in-place output arrays.


indexing_func method

IndicatorBase.indexing_func(
    *args,
    wrapper_meta=None,
    **kwargs
)

Perform indexing on IndicatorBase.


input_names method

Names of the input arrays.


items method

IndicatorBase.items(
    group_by='params',
    apply_group_by=False,
    keep_2d=False,
    key_as_index=False
)

Iterate over columns (or groups if grouped and Wrapping.group_select is True).

Allows the following additional options for group_by: "all_params", "params" (only those that aren't hidden), and parameter names.


lazy_output_names method

Names of the lazy output arrays.


level_names property

Column level names corresponding to each parameter.


main_output property

Get main output.

It's either the only output or an output that matches the short name of the indicator.


output_flags method

Dictionary of output flags.


output_names method

Names of the regular output arrays.


param_defaults class variable


param_names method

Names of the parameters.


rename method

IndicatorBase.rename(
    short_name
)

Replace the short name of the indicator.


row_stack class method

IndicatorBase.row_stack(
    *objs,
    wrapper_kwargs=None,
    **kwargs
)

Stack multiple IndicatorBase instances along rows.

Uses ArrayWrapper.row_stack() to stack the wrappers.

All objects to be merged must have the same columns x parameters.


run class method

IndicatorBase.run(
    *args,
    **kwargs
)

Public run method.


run_combs class method

IndicatorBase.run_combs(
    *args,
    **kwargs
)

Public run combinations method.


run_pipeline class method

IndicatorBase.run_pipeline(
    num_ret_outputs,
    custom_func,
    *args,
    require_input_shape=False,
    input_shape=None,
    input_index=None,
    input_columns=None,
    inputs=None,
    in_outputs=None,
    in_output_settings=None,
    broadcast_named_args=None,
    broadcast_kwargs=None,
    template_context=None,
    params=None,
    param_product=False,
    random_subset=None,
    param_settings=None,
    run_unique=False,
    silence_warnings=False,
    per_column=None,
    keep_pd=False,
    to_2d=True,
    pass_packed=False,
    pass_input_shape=None,
    pass_wrapper=False,
    pass_param_index=False,
    pass_final_index=False,
    pass_single_comb=False,
    level_names=None,
    hide_levels=None,
    build_col_kwargs=None,
    return_raw=False,
    use_raw=None,
    wrapper_kwargs=None,
    seed=None,
    **kwargs
)

A pipeline for running an indicator, used by IndicatorFactory.

Args

num_ret_outputs : int
The number of output arrays returned by custom_func.
custom_func : callable

A custom calculation function.

See IndicatorFactory.with_custom_func().

*args
Arguments passed to the custom_func.
require_input_shape : bool

Whether the input shape is required.

Will set pass_input_shape to True and raise an error if input_shape is None.

input_shape : tuple

Shape to broadcast each input to.

Can be passed to custom_func. See pass_input_shape.

input_index : index_like

Sets the index of each input.

Can be used to label the index if no inputs are passed.

input_columns : index_like

Sets the columns of each input.

Can be used to label the columns if no inputs are passed.

inputs : mapping or sequence of array_like

A mapping or sequence of input arrays.

Use mapping to also supply names. If sequence, will convert to a mapping using input_{i} key.
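The sequence-to-mapping conversion described above can be sketched as follows (a simple enumeration, mirroring the documented input_{i} keys):

```python
import numpy as np

# Sketch of the documented fallback: a sequence of input arrays becomes a
# mapping keyed by input_{i}.
inputs = [np.arange(3), np.ones(3)]
mapping = {f"input_{i}": arr for i, arr in enumerate(inputs)}
print(list(mapping))  # ['input_0', 'input_1']
```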

in_outputs : mapping or sequence of array_like

A mapping or sequence of in-place output arrays.

Use mapping to also supply names. If sequence, will convert to a mapping using in_output_{i} key.

in_output_settings : dict or sequence of dict

Settings corresponding to each in-place output.

If mapping, should contain keys from in_outputs.

Following keys are accepted:

  • dtype: Create this array using this data type and np.empty. Default is None.
broadcast_named_args : dict

Dictionary with named arguments to broadcast together with inputs.

You can then pass argument names wrapped with Rep, and this method will substitute them with their corresponding broadcast objects.

broadcast_kwargs : dict
Keyword arguments passed to broadcast() to broadcast inputs.
template_context : dict
Context used to substitute templates in args and kwargs.
params : mapping or sequence of any

A mapping or sequence of parameters.

Use mapping to also supply names. If sequence, will convert to a mapping using param_{i} key.

Each element is either an array-like object or a single value of any type.

param_product : bool
Whether to build a Cartesian product out of all parameters.
random_subset : int
Number of parameter combinations to pick randomly.
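Conceptually, param_product and random_subset can be sketched with the standard library (a simplification; the pipeline seeds the sampling via the seed argument):

```python
import itertools
import random

# Sketch: param_product=True builds the Cartesian product of all parameter
# lists; random_subset then picks a fixed number of combinations at random.
fast_windows = [2, 3]
slow_windows = [10, 20, 30]

combos = list(itertools.product(fast_windows, slow_windows))
print(len(combos))  # 2 * 3 = 6

random.seed(42)  # `seed` makes the subset reproducible
subset = random.sample(combos, 4)  # random_subset=4
print(len(subset))  # 4
```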
param_settings : dict or sequence of dict

Settings corresponding to each parameter.

If mapping, should contain keys from params.

Following keys are accepted:

  • dtype: If data type is an enumerated type or other mapping, and a string as parameter value was passed, will convert it first.
  • dtype_kwargs: Keyword arguments passed to the function processing the data type. If data type is enumerated, it will be map_enum_fields().
  • is_tuple: If tuple was passed, it will be considered as a single value. To treat it as multiple values, pack it into a list.
  • is_array_like: If array-like object was passed, it will be considered as a single value. To treat it as multiple values, pack it into a list.
  • template: Template to substitute each parameter value with, before broadcasting to input.
  • min_one_dim: Whether to convert any scalar into a one-dimensional array. Works only if bc_to_input is False.
  • bc_to_input: Whether to broadcast parameter to input size. You can also broadcast parameter to an axis by passing an integer.
  • broadcast_kwargs: Keyword arguments passed to broadcast().
  • per_column: Whether each parameter value can be split by columns such that it can be better reflected in a multi-index. Does not affect broadcasting.
  • post_index_func: Function to convert the final index level of the parameter. Defaults to None.
run_unique : bool

Whether to run only on unique parameter combinations.

Disable if two identical parameter combinations can lead to different results (e.g., due to randomness) or if inputs are large and custom_func is fast.

Note

Cache, raw output, and output objects outside of num_ret_outputs will be returned for unique parameter combinations only.
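The idea behind run_unique can be sketched as memoizing on parameter combinations (a simplification of the actual pipeline, which hashes the combinations):

```python
# Sketch: run the (expensive) calculation once per unique parameter
# combination and map the cached result back onto the full list.
param_combs = [(2,), (3,), (2,), (3,), (2,)]  # repeated combinations

def expensive_run(window):
    return window * 10  # stand-in for a real indicator calculation

cache = {}
results = []
for comb in param_combs:
    if comb not in cache:
        cache[comb] = expensive_run(*comb)
    results.append(cache[comb])
print(results)  # [20, 30, 20, 30, 20], with only 2 actual runs
```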

silence_warnings : bool
Whether to hide warnings such as coming from run_unique.
per_column : bool

Whether the values of each parameter should be split by columns.

Defaults to False. Will pass per_column if it's not None.

Each list of parameter values will broadcast to the number of columns and each parameter value will be applied per column rather than per whole input. Input shape must be known beforehand.

Each of the inputs, in-outputs, and parameters will be passed to custom_func with the full shape. The outputs are expected to be of the same shape as the inputs.

keep_pd : bool
Whether to keep inputs as pandas objects, otherwise convert to NumPy arrays.
to_2d : bool
Whether to reshape inputs to 2-dim arrays, otherwise keep as-is.
pass_packed : bool

Whether to pass inputs and parameters to custom_func as lists.

If custom_func is Numba-compiled, passes tuples.

pass_input_shape : bool

Whether to pass input_shape to custom_func as keyword argument.

Defaults to True if require_input_shape is True, otherwise to False.

pass_wrapper : bool
Whether to pass the input wrapper to custom_func as keyword argument.
pass_param_index : bool
Whether to pass parameter index.
pass_final_index : bool
Whether to pass final index.
pass_single_comb : bool
Whether to pass whether there is only one parameter combination.
level_names : list of str

A list of column level names corresponding to each parameter.

Must have the same length as params.

hide_levels : list of int or str
A list of level names or indices of parameter levels to hide.
build_col_kwargs : dict
Keyword arguments passed to build_columns().
return_raw : bool or str

Whether to return raw outputs and hashed parameter tuples without further post-processing.

Pass "outputs" to only return outputs.

use_raw : bool
Takes the raw results and uses them instead of running custom_func.
wrapper_kwargs : dict
Keyword arguments passed to ArrayWrapper.
seed : int
Seed to make output deterministic.
**kwargs

Keyword arguments passed to the custom_func.

Some common arguments include return_cache to return cache and use_cache to use cache. If use_cache is False, disables caching completely. Those are only applicable to custom_func that supports it (custom_func created using IndicatorFactory.with_apply_func() are supported by default).

Returns

Array wrapper, list of inputs (np.ndarray), input mapper (np.ndarray), list of outputs (np.ndarray), list of parameter arrays (np.ndarray), list of parameter mappers (np.ndarray), list of outputs that are outside of num_ret_outputs.


short_name method

Name of the indicator.


to_dict method

IndicatorBase.to_dict(
    include_all=True
)

Return outputs as a dict.


to_frame method

IndicatorBase.to_frame(
    include_all=True
)

Return outputs as a DataFrame.


unpack method

IndicatorBase.unpack()

Return outputs, either one output or a tuple if there are multiple.


IndicatorFactory class

IndicatorFactory(
    class_name=None,
    class_docstring=None,
    module_name='vectorbtpro.indicators.factory',
    short_name=None,
    prepend_name=True,
    input_names=None,
    param_names=None,
    in_output_names=None,
    output_names=None,
    output_flags=None,
    lazy_outputs=None,
    attr_settings=None,
    metrics=None,
    stats_defaults=None,
    subplots=None,
    plots_defaults=None,
    **kwargs
)

Class with an initialization config.

All subclasses of Configured are initialized using Config, which makes it easier to pickle.

Settings are defined under configured.

Warning

If any attribute has been overwritten that isn't listed in Configured._writeable_attrs, or if any Configured.__init__ argument depends upon global defaults, their values won't be copied over. Make sure to pass them explicitly to ensure that the saved & loaded / copied instance is resilient to any changes in globals.

A factory for creating new indicators.

Initialize IndicatorFactory to create a skeleton and then use a class method such as IndicatorFactory.with_custom_func() to bind a calculation function to the skeleton.

Args

class_name : str
Name for the created indicator class.
class_docstring : str
Docstring for the created indicator class.
module_name : str
Name of the module the class originates from.
short_name : str

Short name of the indicator.

Defaults to lower-case class_name.

prepend_name : bool
Whether to prepend short_name to each parameter level.
input_names : list of str
List with input names.
param_names : list of str
List with parameter names.
in_output_names : list of str

List with in-output names.

An in-place output is an output that is not returned but modified in-place. Some advantages of such outputs include:

  • they don't need to be returned,
  • they can be passed between functions as easily as inputs,
  • they can be provided with already allocated data to save memory,
  • if data or a default value are not provided, they are created empty so as not to occupy memory.
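The in-place output pattern can be sketched in plain NumPy (names here are illustrative, not part of the API):

```python
import numpy as np

# Sketch of an in-place output: the caller preallocates the array once and
# the calculation function fills it instead of returning a new one.
def rolling_max_inplace(close, window, out):
    for i in range(close.shape[0]):
        start = max(0, i - window + 1)
        out[i] = close[start:i + 1].max()

close = np.array([1.0, 3.0, 2.0, 5.0, 4.0])
out = np.empty_like(close)  # allocated once, reusable across calls
rolling_max_inplace(close, 3, out)
print(out)  # [1. 3. 3. 5. 5.]
```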

output_names : list of str
List with output names.
output_flags : dict
Dictionary of in-place and regular output flags.
lazy_outputs : dict
Dictionary with user-defined functions that will be bound to the indicator class and wrapped with property if not already wrapped.
attr_settings : dict

Dictionary with attribute settings.

Attributes can be input_names, in_output_names, output_names, and lazy_outputs.

Following keys are accepted:

  • dtype: Data type used to determine which methods to generate around this attribute. Set to None to disable. Default is np.float_. Can be set to an instance of collections.namedtuple acting as an enumerated type, or any other mapping; it will then create a property with the suffix readable that contains the data in string format.
  • enum_unkval: Value to be considered as unknown. Applies to enumerated data types only.
  • make_cacheable: Whether to make the property cacheable. Applies to inputs only.
metrics : dict

Metrics supported by StatsBuilderMixin.stats().

If dict, will be converted to Config.

stats_defaults : callable or dict

Defaults for StatsBuilderMixin.stats().

If dict, will be converted into a property.

subplots : dict

Subplots supported by PlotsBuilderMixin.plots().

If dict, will be converted to Config.

plots_defaults : callable or dict

Defaults for PlotsBuilderMixin.plots().

If dict, will be converted into a property.

**kwargs
Custom keyword arguments passed to the config.

Note

The __init__ method is not used for running the indicator; use run for that. The reason for this is indexing, which requires a clean __init__ method for creating a new indicator object with newly indexed attributes.

Superclasses

Inherited members

Subclasses


Indicator property

Built indicator class.


attr_settings property

Dictionary with attribute settings.


class_docstring property

Docstring for the created indicator class.


class_name property

Name for the created indicator class.


custom_indicators class variable


deregister_custom_indicator class method

IndicatorFactory.deregister_custom_indicator(
    name=None,
    location=None,
    remove_location=True
)

Deregister a custom indicator by its name and location.

If location is None, deregisters all indicators with the same name across all custom locations.


find_smc_indicator class method

IndicatorFactory.find_smc_indicator(
    func_name,
    raise_error=True
)

Get smartmoneyconcepts indicator class by its name.


find_ta_indicator class method

IndicatorFactory.find_ta_indicator(
    cls_name
)

Get ta() indicator class by its name.


find_technical_indicator class method

IndicatorFactory.find_technical_indicator(
    func_name
)

Get technical() indicator function by its name.


from_custom_techcon class method

IndicatorFactory.from_custom_techcon(
    consensus_cls,
    factory_kwargs=None,
    **kwargs
)

Create an indicator based on a technical consensus class subclassing technical.consensus.consensus.Consensus.

Requires Technical library: https://github.com/freqtrade/technical


from_expr class method

IndicatorFactory.from_expr(
    expr,
    parse_annotations=True,
    factory_kwargs=None,
    magnet_inputs=None,
    magnet_in_outputs=None,
    magnet_params=None,
    func_mapping=None,
    res_func_mapping=None,
    use_pd_eval=None,
    pd_eval_kwargs=None,
    return_clean_expr=False,
    **kwargs
)

Build an indicator class from an indicator expression.

Args

expr : str

Expression.

The expression must be a string with valid Python code. Both single-line and multi-line expressions are supported.

parse_annotations : bool
Whether to parse annotations starting with @.
factory_kwargs : dict

Keyword arguments passed to IndicatorFactory.

Only applied when calling the class method.

magnet_inputs : iterable of str

Names recognized as input names.

Defaults to open, high, low, close, and volume.

magnet_in_outputs : iterable of str

Names recognized as in-output names.

Defaults to an empty list.

magnet_params : iterable of str

Names recognized as parameter names.

Defaults to an empty list.

func_mapping : mapping

Mapping merged over expr_func_config.

Each key must be a function name and each value must be a dict with func and optionally magnet_inputs, magnet_in_outputs, and magnet_params.

res_func_mapping : mapping

Mapping merged over expr_res_func_config.

Each key must be a function name and each value must be a dict with func and optionally magnet_inputs, magnet_in_outputs, and magnet_params.

use_pd_eval : bool

Whether to use pd.eval.

Defaults to False.

Otherwise, uses multiline_eval().

Hint

By default, operates on NumPy objects using NumExpr. If you want to operate on Pandas objects, set keep_pd to True.

pd_eval_kwargs : dict
Keyword arguments passed to pd.eval.
return_clean_expr : bool
Whether to return a cleaned expression.
**kwargs
Keyword arguments passed to IndicatorFactory.with_apply_func().

Returns

Indicator

Searches each variable name parsed from expr in expr_func_config and expr_res_func_config, which can be overridden with func_mapping and res_func_mapping respectively.

Note

Each variable name is case-sensitive.

When using the class method, all names are parsed from the expression itself. If any of open, high, low, close, and volume appear in the expression or in magnet_inputs in either expr_func_config or expr_res_func_config, they are automatically added to input_names. Set magnet_inputs to an empty list to disable this logic.

If the expression begins with a valid variable name and a colon (:), the variable name will be used as the name of the generated class. Provide another variable in the square brackets after this one and before the colon to specify the indicator's short name.

If parse_annotations is True, variables that start with @ have a special meaning:

  • @in_*: input variable
  • @inout_*: in-output variable
  • @p_*: parameter variable
  • @out_*: output variable
  • @out_*:: indicates that the next part until a comma is an output
  • @talib_*: name of a TA-Lib function. Uses the indicator's apply_func.
  • @res_*: name of the indicator to resolve automatically. Input names can overlap with those of other indicators, while all other information gets a prefix with the indicator's short name.
  • @settings(*): settings to be merged with the current IndicatorFactory.from_expr() settings. Everything within the parentheses gets evaluated using Python's eval command and must be a dictionary. Overrides defaults but gets overridden by any argument passed to this method. The arguments expr and parse_annotations cannot be overridden.

Note

The parsed names come in the same order they appear in the expression, not in the execution order, apart from the magnet input names, which are added in the same order they appear in the list.

The number of outputs is derived from the number of commas outside of any bracket pair. If there is only one output, its name is out; if there are more, they are named out1, out2, etc.
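Counting commas outside any bracket pair can be sketched as follows (a simplification; the real parser also handles strings and annotations):

```python
# Sketch: count commas that sit outside any bracket pair to derive the
# number of outputs from an expression.
def count_outputs(expr: str) -> int:
    depth = 0
    commas = 0
    for ch in expr:
        if ch in "([{":
            depth += 1
        elif ch in ")]}":
            depth -= 1
        elif ch == "," and depth == 0:
            commas += 1
    return commas + 1

print(count_outputs("sma(close, fast), sma(close, slow)"))  # 2
print(count_outputs("wm_mean_nb(close, window)"))           # 1
```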

Any information can be overridden using factory_kwargs.

Usage

>>> WMA = vbt.IF(
...     class_name='WMA',
...     input_names=['close'],
...     param_names=['window'],
...     output_names=['wma']
... ).from_expr("wm_mean_nb(close, window)")

>>> wma = WMA.run(price, window=[2, 3])
>>> wma.wma
wma_window                   2                   3
                   a         b         a         b
2020-01-01       NaN       NaN       NaN       NaN
2020-01-02  1.666667  4.333333       NaN       NaN
2020-01-03  2.666667  3.333333  2.333333  3.666667
2020-01-04  3.666667  2.333333  3.333333  2.666667
2020-01-05  4.666667  1.333333  4.333333  1.666667

  • The same can be achieved by calling the class method and providing prefixes to the variable names to indicate their type:
>>> expr = "WMA: @out_wma:wm_mean_nb((@in_high + @in_low) / 2, @p_window)"
>>> WMA = vbt.IF.from_expr(expr)
>>> wma = WMA.run(price + 1, price, window=[2, 3])
>>> wma.wma
wma_window                   2                   3
                   a         b         a         b
2020-01-01       NaN       NaN       NaN       NaN
2020-01-02  2.166667  4.833333       NaN       NaN
2020-01-03  3.166667  3.833333  2.833333  4.166667
2020-01-04  4.166667  2.833333  3.833333  3.166667
2020-01-05  5.166667  1.833333  4.833333  2.166667

  • Magnet names are recognized automatically:
>>> expr = "WMA: @out_wma:wm_mean_nb((high + low) / 2, @p_window)"

  • Most settings of this method can be overridden from within the expression:
>>> expr = """
... @settings(dict(factory_kwargs=dict(class_name='WMA', param_names=['window'])))
... @out_wma:wm_mean_nb((high + low) / 2, window)
... """

from_pandas_ta class method

IndicatorFactory.from_pandas_ta(
    func_name,
    parse_kwargs=None,
    factory_kwargs=None,
    **kwargs
)

Build an indicator class around a pandas_ta() function.

Requires pandas-ta installed.

Args

func_name : str
Function name.
parse_kwargs : dict
Keyword arguments passed to IndicatorFactory.parse_pandas_ta_config().
factory_kwargs : dict
Keyword arguments passed to IndicatorFactory.
**kwargs
Keyword arguments passed to IndicatorFactory.with_apply_func().

Returns

Indicator

Usage

>>> SMA = vbt.IF.from_pandas_ta('SMA')

>>> sma = SMA.run(price, length=[2, 3])
>>> sma.sma
sma_length         2         3
              a    b    a    b
2020-01-01  NaN  NaN  NaN  NaN
2020-01-02  1.5  4.5  NaN  NaN
2020-01-03  2.5  3.5  2.0  4.0
2020-01-04  3.5  2.5  3.0  3.0
2020-01-05  4.5  1.5  4.0  2.0

  • To get help on running the indicator, use phelp():
>>> vbt.phelp(SMA.run)
SMA.run(
    close,
    length=Default(value=None),
    talib=Default(value=None),
    offset=Default(value=None),
    short_name='sma',
    hide_params=None,
    hide_default=True,
    **kwargs
):
    Run `SMA` indicator.

    * Inputs: `close`
    * Parameters: `length`, `talib`, `offset`
    * Outputs: `sma`

    Pass a list of parameter names as `hide_params` to hide their column levels, or True to hide all.
    Set `hide_default` to False to show the column levels of the parameters with a default value.

    Other keyword arguments are passed to `SMA.run_pipeline`.

  • To get the indicator docstring, use the help command or print the __doc__ attribute:
>>> print(SMA.__doc__)
Simple Moving Average (SMA)

The Simple Moving Average is the classic moving average that is the equally
weighted average over n periods.

Sources:
    <https://www.tradingtechnologies.com/help/x-study/technical-indicator-definitions/simple-moving-average-sma/>

Calculation:
    Default Inputs:
        length=10
    SMA = SUM(close, length) / length

Args:
    close (pd.Series): Series of 'close's
    length (int): It's period. Default: 10
    offset (int): How many periods to offset the result. Default: 0

Kwargs:
    adjust (bool): Default: True
    presma (bool, optional): If True, uses SMA for initial value.
    fillna (value, optional): pd.DataFrame.fillna(value)
    fill_method (value, optional): Type of fill method

Returns:
    pd.Series: New feature generated.

from_smc class method

IndicatorFactory.from_smc(
    func_name,
    collapse=True,
    parse_kwargs=None,
    factory_kwargs=None,
    **kwargs
)

Build an indicator class around a smartmoneyconcepts function.

Requires smart-money-concepts installed.

Args

func_name : str
Function name.
collapse : bool
Whether to collapse all nested indicators into a single one.
parse_kwargs : dict
Keyword arguments passed to IndicatorFactory.parse_smc_config().
factory_kwargs : dict
Keyword arguments passed to IndicatorFactory.
**kwargs
Keyword arguments passed to IndicatorFactory.with_apply_func().

from_ta class method

IndicatorFactory.from_ta(
    cls_name,
    factory_kwargs=None,
    **kwargs
)

Build an indicator class around a ta() class.

Requires ta installed.

Args

cls_name : str
Class name.
factory_kwargs : dict
Keyword arguments passed to IndicatorFactory.
**kwargs
Keyword arguments passed to IndicatorFactory.with_apply_func().

Returns

Indicator

Usage

>>> SMAIndicator = vbt.IF.from_ta('SMAIndicator')

>>> sma = SMAIndicator.run(price, window=[2, 3])
>>> sma.sma_indicator
smaindicator_window    2         3
                       a    b    a    b
2020-01-01           NaN  NaN  NaN  NaN
2020-01-02           1.5  4.5  NaN  NaN
2020-01-03           2.5  3.5  2.0  4.0
2020-01-04           3.5  2.5  3.0  3.0
2020-01-05           4.5  1.5  4.0  2.0

  • To get help on running the indicator, use phelp():
>>> vbt.phelp(SMAIndicator.run)
SMAIndicator.run(
    close,
    window,
    fillna=Default(value=False),
    short_name='smaindicator',
    hide_params=None,
    hide_default=True,
    **kwargs
):
    Run `SMAIndicator` indicator.

    * Inputs: `close`
    * Parameters: `window`, `fillna`
    * Outputs: `sma_indicator`

    Pass a list of parameter names as `hide_params` to hide their column levels, or True to hide all.
    Set `hide_default` to False to show the column levels of the parameters with a default value.

    Other keyword arguments are passed to `SMAIndicator.run_pipeline`.

  • To get the indicator docstring, use the help command or print the __doc__ attribute:
>>> print(SMAIndicator.__doc__)
SMA - Simple Moving Average

    Args:
        close(pandas.Series): dataset 'Close' column.
        window(int): n period.
        fillna(bool): if True, fill nan values.

from_talib class method

IndicatorFactory.from_talib(
    func_name,
    factory_kwargs=None,
    **kwargs
)

Build an indicator class around a talib() function.

Requires TA-Lib installed.

For input, parameter and output names, see docs.

Args

func_name : str
Function name.
factory_kwargs : dict
Keyword arguments passed to IndicatorFactory.
**kwargs
Keyword arguments passed to IndicatorFactory.with_apply_func().

Returns

Indicator

Usage

>>> SMA = vbt.IF.from_talib('SMA')

>>> sma = SMA.run(price, timeperiod=[2, 3])
>>> sma.real
sma_timeperiod         2         3
                  a    b    a    b
2020-01-01      NaN  NaN  NaN  NaN
2020-01-02      1.5  4.5  NaN  NaN
2020-01-03      2.5  3.5  2.0  4.0
2020-01-04      3.5  2.5  3.0  3.0
2020-01-05      4.5  1.5  4.0  2.0

  • To get help on running the indicator, use phelp():
>>> vbt.phelp(SMA.run)
SMA.run(
    close,
    timeperiod=Default(value=30),
    timeframe=Default(value=None),
    short_name='sma',
    hide_params=None,
    hide_default=True,
    **kwargs
):
    Run `SMA` indicator.

    * Inputs: `close`
    * Parameters: `timeperiod`, `timeframe`
    * Outputs: `real`

    Pass a list of parameter names as `hide_params` to hide their column levels, or True to hide all.
    Set `hide_default` to False to show the column levels of the parameters with a default value.

    Other keyword arguments are passed to `SMA.run_pipeline`.

  • To plot an indicator:
>>> sma.plot(column=(2, 'a')).show()


from_techcon class method

IndicatorFactory.from_techcon(
    cls_name,
    **kwargs
)

Create an indicator from a preset technical consensus.

Supported are case-insensitive values MACON (or MovingAverageConsensus), OSCCON (or OscillatorConsensus), and SUMCON (or SummaryConsensus).


from_technical class method

IndicatorFactory.from_technical(
    func_name,
    parse_kwargs=None,
    factory_kwargs=None,
    **kwargs
)

Build an indicator class around a technical() function.

Requires technical installed.

Args

func_name : str
Function name.
parse_kwargs : dict
Keyword arguments passed to IndicatorFactory.parse_technical_config().
factory_kwargs : dict
Keyword arguments passed to IndicatorFactory.
**kwargs
Keyword arguments passed to IndicatorFactory.with_apply_func().

Returns

Indicator

Usage

>>> ROLLING_MEAN = vbt.IF.from_technical("ROLLING_MEAN")

>>> rolling_mean = ROLLING_MEAN.run(price, window=[3, 4])
>>> rolling_mean.rolling_mean
rolling_mean_window         3         4
                       a    b    a    b
2020-01-01           NaN  NaN  NaN  NaN
2020-01-02           NaN  NaN  NaN  NaN
2020-01-03           2.0  4.0  NaN  NaN
2020-01-04           3.0  3.0  2.5  3.5
2020-01-05           4.0  2.0  3.5  2.5

  • To get help on running the indicator, use phelp():
>>> vbt.phelp(ROLLING_MEAN.run)
ROLLING_MEAN.run(
    close,
    window=Default(value=200),
    min_periods=Default(value=None),
    short_name='rolling_mean',
    hide_params=None,
    hide_default=True,
    **kwargs
):
    Run `ROLLING_MEAN` indicator.

    * Inputs: `close`
    * Parameters: `window`, `min_periods`
    * Outputs: `rolling_mean`

    Pass a list of parameter names as `hide_params` to hide their column levels, or True to hide all.
    Set `hide_default` to False to show the column levels of the parameters with a default value.

    Other keyword arguments are passed to `ROLLING_MEAN.run_pipeline`.

from_wqa101 class method

IndicatorFactory.from_wqa101(
    alpha_idx,
    **kwargs
)

Build an indicator class from one of the WorldQuant's 101 alpha expressions.

See wqa101_expr_config.

Note

Some expressions that utilize cross-sectional operations require columns to be a multi-index with a level sector, subindustry, or industry.

Usage

>>> data = vbt.YFData.pull(['BTC-USD', 'ETH-USD'])

>>> WQA1 = vbt.IF.from_wqa101(1)
>>> wqa1 = WQA1.run(data.get('Close'))
>>> wqa1.out
symbol                     BTC-USD  ETH-USD
Date
2014-09-17 00:00:00+00:00     0.25     0.25
2014-09-18 00:00:00+00:00     0.25     0.25
2014-09-19 00:00:00+00:00     0.25     0.25
2014-09-20 00:00:00+00:00     0.25     0.25
2014-09-21 00:00:00+00:00     0.25     0.25
...                            ...      ...
2022-01-21 00:00:00+00:00     0.00     0.50
2022-01-22 00:00:00+00:00     0.00     0.50
2022-01-23 00:00:00+00:00     0.25     0.25
2022-01-24 00:00:00+00:00     0.50     0.00
2022-01-25 00:00:00+00:00     0.50     0.00

[2688 rows x 2 columns]
  • To get help on running the indicator, use phelp():
>>> vbt.phelp(WQA1.run)
WQA1.run(
    close,
    short_name='wqa1',
    hide_params=None,
    hide_default=True,
    **kwargs
):
    Run `WQA1` indicator.

    * Inputs: `close`
    * Outputs: `out`

    Pass a list of parameter names as `hide_params` to hide their column levels, or True to hide all.
    Set `hide_default` to False to show the column levels of the parameters with a default value.

    Other keyword arguments are passed to `WQA1.run_pipeline`.

get_custom_indicator class method

IndicatorFactory.get_custom_indicator(
    name,
    location=None,
    return_first=False
)

Get a custom indicator.


get_indicator class method

IndicatorFactory.get_indicator(
    name,
    location=None
)

Get the indicator class by its name.

The name can contain a location prefix followed by a colon, or a location joined by an underscore. For example, "talib:sma" or "talib_sma" will return TA-Lib's SMA. Without a location, the indicator is searched across all locations, including vectorbt's own indicators.


in_output_names property

List with in-output names.


input_names property

List with input names.


lazy_outputs property

Dictionary with user-defined functions that will become properties.


list_builtin_locations class method

IndicatorFactory.list_builtin_locations()

List built-in locations.

Locations appear in the order defined by the author.


list_custom_indicators class method

IndicatorFactory.list_custom_indicators(
    uppercase=False,
    location=None,
    prepend_location=None
)

List custom indicators.


list_custom_locations class method

IndicatorFactory.list_custom_locations()

List custom locations.

Locations appear in the order in which they were registered.


list_indicators class method

IndicatorFactory.list_indicators(
    pattern=None,
    case_sensitive=False,
    use_regex=False,
    location=None,
    prepend_location=None
)

List indicators, optionally matching a pattern.

The pattern can also be a location, in which case all indicators from that location are returned. For supported locations, see IndicatorFactory.list_locations().
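The pattern/regex semantics can be sketched in plain Python. This is an illustration of the filtering behavior described above, not the factory's actual implementation, and the indicator names are hypothetical:

```python
import fnmatch
import re

def filter_names(names, pattern, case_sensitive=False, use_regex=False):
    # Illustrative filter mirroring the pattern/use_regex semantics
    # (a sketch, not IndicatorFactory's real code).
    if use_regex:
        flags = 0 if case_sensitive else re.IGNORECASE
        return [n for n in names if re.search(pattern, n, flags)]
    if not case_sensitive:
        return [n for n in names if fnmatch.fnmatch(n.lower(), pattern.lower())]
    return [n for n in names if fnmatch.fnmatchcase(n, pattern)]

names = ["SMA", "EMA", "RSI", "WMA"]  # hypothetical indicator names
print(filter_names(names, "*ma"))                 # glob-style match
print(filter_names(names, "^R", use_regex=True))  # regex match
```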


list_locations class method

IndicatorFactory.list_locations()

List all supported locations.

First come custom locations, then built-in locations.


list_pandas_ta_indicators class method

IndicatorFactory.list_pandas_ta_indicators(
    silence_warnings=True,
    **kwargs
)

List all parseable indicators in pandas_ta().

Note

Returns only the indicators that have been successfully parsed.


list_smc_indicators class method

IndicatorFactory.list_smc_indicators(
    silence_warnings=True,
    **kwargs
)

List all parseable indicators in smartmoneyconcepts.


list_ta_indicators class method

IndicatorFactory.list_ta_indicators(
    uppercase=False
)

List all parseable indicators in ta().


list_talib_indicators class method

IndicatorFactory.list_talib_indicators()

List all parseable indicators in talib().


list_techcon_indicators class method

IndicatorFactory.list_techcon_indicators()

List all consensus indicators in technical().


list_technical_indicators class method

IndicatorFactory.list_technical_indicators(
    silence_warnings=True,
    **kwargs
)

List all parseable indicators in technical().


list_vbt_indicators class method

IndicatorFactory.list_vbt_indicators()

List all vectorbt indicators.


list_wqa101_indicators class method

IndicatorFactory.list_wqa101_indicators()

List all WorldQuant's 101 alpha indicators.


match_location class method

IndicatorFactory.match_location(
    location
)

Match location.


metrics property

Metrics supported by StatsBuilderMixin.stats().


module_name property

Name of the module the class originates from.


output_flags property

Dictionary of in-place and regular output flags.


output_names property

List with output names.


param_names property

List with parameter names.


parse_pandas_ta_config class method

IndicatorFactory.parse_pandas_ta_config(
    func,
    test_input_names=None,
    test_index_len=100,
    silence_warnings=False,
    **kwargs
)

Parse the config of a pandas_ta() indicator.


parse_smc_config class method

IndicatorFactory.parse_smc_config(
    func,
    collapse=True,
    snake_case=True
)

Parse the config of a smartmoneyconcepts indicator.


parse_ta_config class method

IndicatorFactory.parse_ta_config(
    ind_cls
)

Parse the config of a ta() indicator.


parse_technical_config class method

IndicatorFactory.parse_technical_config(
    func,
    test_index_len=100
)

Parse the config of a technical() indicator.


plots_defaults property

Defaults for PlotsBuilderMixin.plots().


prepend_name property

Whether to prepend IndicatorFactory.short_name to each parameter level.


register_custom_indicator class method

IndicatorFactory.register_custom_indicator(
    indicator,
    name=None,
    location=None,
    if_exists='raise'
)

Register a custom indicator under a custom location.

Argument if_exists can be "raise", "skip", or "override".
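The three if_exists modes can be illustrated with a plain dictionary acting as the registry. This is a simplified sketch of the semantics described above, not the factory's actual registry code:

```python
def register(registry, name, indicator, if_exists="raise"):
    # "raise": error on a duplicate name; "skip": keep the existing
    # entry; "override": replace it (sketch of the documented modes).
    if name in registry:
        if if_exists == "raise":
            raise ValueError(f"Indicator '{name}' already exists")
        if if_exists == "skip":
            return registry
    registry[name] = indicator
    return registry

reg = {}
register(reg, "MY_SMA", "old")
register(reg, "MY_SMA", "new", if_exists="skip")      # keeps "old"
register(reg, "MY_SMA", "new", if_exists="override")  # replaces it
print(reg["MY_SMA"])  # new
```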


short_name property

Short name of the indicator.


split_indicator_name class method

IndicatorFactory.split_indicator_name(
    name
)

Split an indicator name into location and actual name.
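The split can be sketched in plain Python. This is a simplified illustration of the colon convention described under get_indicator(), not the method's actual implementation:

```python
def split_name(name):
    # "location:name" splits at the first colon; a name without
    # a colon has no location part (simplified sketch).
    location, sep, rest = name.partition(":")
    if sep:
        return location, rest
    return None, name

print(split_name("talib:sma"))  # ('talib', 'sma')
print(split_name("sma"))        # (None, 'sma')
```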


stats_defaults property

Defaults for StatsBuilderMixin.stats().


subplots property

Subplots supported by PlotsBuilderMixin.plots().


with_apply_func method

IndicatorFactory.with_apply_func(
    apply_func,
    cache_func=None,
    takes_1d=False,
    select_params=True,
    pass_packed=False,
    cache_pass_packed=None,
    pass_per_column=False,
    cache_pass_per_column=None,
    forward_skipna=False,
    kwargs_as_args=None,
    jit_kwargs=None,
    **kwargs
)

Build indicator class around a custom apply function.

In contrast to IndicatorFactory.with_custom_func(), this method handles a lot of things for you, such as caching, parameter selection, and concatenation. Your part is writing a function apply_func that accepts a single combination of parameters (single values as opposed to the multiple values in IndicatorFactory.with_custom_func()) and does the calculation. The factory then automatically concatenates the resulting arrays into a single array per output.

While this approach is simpler, it's also less flexible, since you can only work with one parameter combination at a time and can't view all parameter combinations at once.

The execution and concatenation are performed using apply_and_concat_each().
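Conceptually, the run-per-parameter-and-concatenate step looks like the following NumPy sketch (a simplification of what apply_and_concat_each() does, not its actual implementation):

```python
import numpy as np

def apply_and_concat(apply_func, params, ts):
    # Run apply_func once per parameter value, then stack the
    # resulting arrays along the column axis (simplified sketch).
    outputs = [apply_func(ts, p) for p in params]
    return np.column_stack(outputs)

ts = np.array([[1.0, 5.0], [2.0, 4.0]])
out = apply_and_concat(lambda ts, p: ts * p, [1, 2], ts)
print(out.shape)  # (2, 4): input columns times number of parameters
```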

Note

If apply_func is a Numba-compiled function:

  • All inputs are automatically converted to NumPy arrays
  • Each argument in *args must be of a Numba-compatible type
  • You cannot pass keyword arguments
  • Your outputs must be arrays of the same shape, data type, and data order

Note

Reserved arguments such as per_column (in this order) get passed as positional arguments if jitted_loop is True, otherwise as keyword arguments.

Args

apply_func : callable

A function that takes inputs, selection of parameters, and other arguments, and does calculations to produce outputs.

Arguments are passed to apply_func in the following order:

  • i (index of the parameter combination) if select_params is set to False
  • input_shape if pass_input_shape is set to True and input_shape not in kwargs_as_args
  • Input arrays corresponding to input_names. Passed as a tuple if pass_packed, otherwise unpacked. If select_params is True, each argument is a list composed of multiple arrays - one per parameter combination. When per_column is True, each of those arrays corresponds to a column. Otherwise, they all refer to the same array. If takes_1d, each array gets additionally split into multiple column arrays. Still passed as a single array to the caching function.
  • In-output arrays corresponding to in_output_names. Passed as a tuple if pass_packed, otherwise unpacked. If select_params is True, each argument is a list composed of multiple arrays - one per parameter combination. When per_column is True, each of those arrays corresponds to a column. If takes_1d, each array gets additionally split into multiple column arrays. Still passed as a single array to the caching function.
  • Parameters corresponding to param_names. Passed as a tuple if pass_packed, otherwise unpacked. If select_params is True, each argument is a list composed of multiple values - one per parameter combination. When per_column is True, each of those values corresponds to a column. If takes_1d, each value gets additionally repeated by the number of columns in the input arrays.
  • Variable arguments if var_args is set to True
  • per_column if pass_per_column is set to True and per_column not in kwargs_as_args and jitted_loop is set to True
  • Arguments listed in kwargs_as_args passed as positional. Can include takes_1d and per_column.
  • Other keyword arguments if jitted_loop is False. Also includes takes_1d and per_column if they must be passed and not in kwargs_as_args.

Can be Numba-compiled (but doesn't have to).

Note

Shape of each output must be the same and match the shape of each input.

cache_func : callable

A caching function to preprocess data beforehand.

Takes the same arguments as apply_func. Must return a single object or a tuple of objects. All returned objects are passed unpacked as the last arguments to apply_func.

Can be Numba-compiled (but doesn't have to).
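The idea behind cache_func can be sketched in plain NumPy: an expensive preprocessing step runs once, and its result is appended to the arguments of every per-parameter call. A simplified illustration of the mechanism, not the factory's actual plumbing:

```python
import numpy as np

def cache_func(ts):
    # Expensive preprocessing shared by all parameter combinations;
    # computed a single time up front (sketch).
    return np.cumsum(ts, axis=0)

def apply_func(ts, p, cumsum):
    # Receives the cached result as a trailing argument instead of
    # recomputing it for every parameter.
    return cumsum * p

ts = np.array([[1.0], [2.0], [3.0]])
cached = cache_func(ts)  # runs once
outs = [apply_func(ts, p, cached) for p in [1, 2]]
print(outs[1][-1, 0])  # 12.0: last cumsum value 6.0 times p=2
```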

takes_1d : bool

Whether to split 2-dim arrays into multiple 1-dim arrays along the column axis.

Gets applied on inputs and in-outputs, while parameters get repeated by the number of columns.
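The splitting and repetition can be sketched with NumPy. A simplified illustration of the takes_1d behavior described above, not the factory's actual code:

```python
import numpy as np

ts = np.array([[1.0, 5.0], [2.0, 4.0]])  # one input with 2 columns
params = [10]

# Split the 2-dim array into one 1-dim array per column (sketch):
columns = [ts[:, col] for col in range(ts.shape[1])]
# Each parameter value is repeated once per column:
repeated = [p for p in params for _ in range(ts.shape[1])]

print(len(columns), columns[0].ndim)  # 2 1
print(repeated)                       # [10, 10]
```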

select_params : bool

Whether to automatically select in-outputs and parameters.

If False, prepends the current iteration index to the arguments.

pass_packed : bool
Whether to pass packed tuples for inputs, in-place outputs, and parameters.
cache_pass_packed : bool
Overrides pass_packed for the caching function.
pass_per_column : bool
Whether to pass per_column.
cache_pass_per_column : bool
Overrides pass_per_column for the caching function.
forward_skipna : bool
Whether to forward skipna to the apply function.
kwargs_as_args : iterable of str

Keyword arguments from kwargs dict to pass as positional arguments to the apply function.

Should be used together with jitted_loop set to True since Numba doesn't support variable keyword arguments.

Defaults to []. Order matters.

jit_kwargs : dict

Keyword arguments passed to @njit decorator of the parameter selection function.

By default, has nogil set to True.

**kwargs
Keyword arguments passed to IndicatorFactory.with_custom_func(), all the way down to apply_and_concat_each().

Returns

Indicator

Usage

>>> @njit
... def apply_func_nb(ts1, ts2, p1, p2, arg1, arg2):
...     return ts1 * p1 + arg1, ts2 * p2 + arg2

>>> MyInd = vbt.IF(
...     input_names=['ts1', 'ts2'],
...     param_names=['p1', 'p2'],
...     output_names=['out1', 'out2']
... ).with_apply_func(
...     apply_func_nb, var_args=True,
...     kwargs_as_args=['arg2'], arg2=200)

>>> myInd = MyInd.run(price, price * 2, [1, 2], [3, 4], 100)
>>> myInd.out1
custom_p1              1             2
custom_p2              3             4
                a      b      a      b
2020-01-01  101.0  105.0  102.0  110.0
2020-01-02  102.0  104.0  104.0  108.0
2020-01-03  103.0  103.0  106.0  106.0
2020-01-04  104.0  102.0  108.0  104.0
2020-01-05  105.0  101.0  110.0  102.0
>>> myInd.out2
custom_p1              1             2
custom_p2              3             4
                a      b      a      b
2020-01-01  206.0  230.0  208.0  240.0
2020-01-02  212.0  224.0  216.0  232.0
2020-01-03  218.0  218.0  224.0  224.0
2020-01-04  224.0  212.0  232.0  216.0
2020-01-05  230.0  206.0  240.0  208.0
  • To change the execution engine or specify other engine-related arguments, use execute_kwargs:
>>> import time

>>> def apply_func(ts, p):
...     time.sleep(1)
...     return ts * p

>>> MyInd = vbt.IF(
...     input_names=['ts'],
...     param_names=['p'],
...     output_names=['out']
... ).with_apply_func(apply_func)

>>> %timeit MyInd.run(price, [1, 2, 3])
3.02 s ± 3.47 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

>>> %timeit MyInd.run(price, [1, 2, 3], execute_kwargs=dict(engine='dask'))
1.02 s ± 2.67 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

with_custom_func method

IndicatorFactory.with_custom_func(
    custom_func,
    require_input_shape=False,
    param_settings=None,
    in_output_settings=None,
    hide_params=None,
    hide_default=True,
    var_args=False,
    keyword_only_args=False,
    **pipeline_kwargs
)

Build indicator class around a custom calculation function.

In contrast to IndicatorFactory.with_apply_func(), this method offers full flexibility. It's up to the user to handle caching and concatenate columns for each parameter (for example, by using apply_and_concat()). Also, you must ensure that each output array has an appropriate number of columns, which is the number of columns in input arrays multiplied by the number of parameter combinations.

Args

custom_func : callable

A function that takes broadcast arrays corresponding to input_names, broadcast in-place output arrays corresponding to in_output_names, broadcast parameter arrays corresponding to param_names, and other arguments and keyword arguments, and returns outputs corresponding to output_names and other objects that are then returned with the indicator instance.

Can be Numba-compiled.

Note

Shape of each output must be the same and match the shape of each input stacked n times (= the number of parameter values) along the column axis.

require_input_shape : bool
Whether the input shape is required.
param_settings : dict

A dictionary of parameter settings keyed by name. See IndicatorBase.run_pipeline() for keys.

Can be overwritten by any run method.

in_output_settings : dict

A dictionary of in-place output settings keyed by name. See IndicatorBase.run_pipeline() for keys.

Can be overwritten by any run method.

hide_params : bool or list of str

Parameter names to hide column levels for, or whether to hide all parameters.

Can be overwritten by any run method.

hide_default : bool

Whether to hide column levels of parameters with default value.

Can be overwritten by any run method.

var_args : bool

Whether run methods should accept variable arguments (*args).

Set to True if custom_func accepts positional arguments that are not listed in the config.

keyword_only_args : bool

Whether run methods should accept keyword-only arguments (*).

Set to True to force the user to use keyword arguments (e.g., to avoid misplacing arguments).

**pipeline_kwargs

Keyword arguments passed to IndicatorBase.run_pipeline().

Can be overwritten by any run method.

Can contain default values and also references to other arguments wrapped with Ref.

Returns

Indicator, and optionally other objects that are returned by custom_func and exceed output_names.

Usage

>>> @njit
... def apply_func_nb(i, ts1, ts2, p1, p2, arg1, arg2):
...     return ts1 * p1[i] + arg1, ts2 * p2[i] + arg2

>>> @njit
... def custom_func(ts1, ts2, p1, p2, arg1, arg2):
...     return vbt.base.combining.apply_and_concat_multiple_nb(
...         len(p1), apply_func_nb, ts1, ts2, p1, p2, arg1, arg2)

>>> MyInd = vbt.IF(
...     input_names=['ts1', 'ts2'],
...     param_names=['p1', 'p2'],
...     output_names=['o1', 'o2']
... ).with_custom_func(custom_func, var_args=True, arg2=200)

>>> myInd = MyInd.run(price, price * 2, [1, 2], [3, 4], 100)
>>> myInd.o1
custom_p1              1             2
custom_p2              3             4
                a      b      a      b
2020-01-01  101.0  105.0  102.0  110.0
2020-01-02  102.0  104.0  104.0  108.0
2020-01-03  103.0  103.0  106.0  106.0
2020-01-04  104.0  102.0  108.0  104.0
2020-01-05  105.0  101.0  110.0  102.0
>>> myInd.o2
custom_p1              1             2
custom_p2              3             4
                a      b      a      b
2020-01-01  206.0  230.0  208.0  240.0
2020-01-02  212.0  224.0  216.0  232.0
2020-01-03  218.0  218.0  224.0  224.0
2020-01-04  224.0  212.0  232.0  216.0
2020-01-05  230.0  206.0  240.0  208.0

The difference between apply_func_nb here and in IndicatorFactory.with_apply_func() is that here it additionally takes the index of the current parameter combination, which can be used for parameter selection.

  • You can also remove apply_func_nb entirely and define your logic in custom_func (which doesn't necessarily have to be Numba-compiled):
>>> @njit
... def custom_func(ts1, ts2, p1, p2, arg1, arg2):
...     input_shape = ts1.shape
...     n_params = len(p1)
...     out1 = np.empty((input_shape[0], input_shape[1] * n_params), dtype=np.float_)
...     out2 = np.empty((input_shape[0], input_shape[1] * n_params), dtype=np.float_)
...     for k in range(n_params):
...         for col in range(input_shape[1]):
...             for i in range(input_shape[0]):
...                 out1[i, input_shape[1] * k + col] = ts1[i, col] * p1[k] + arg1
...                 out2[i, input_shape[1] * k + col] = ts2[i, col] * p2[k] + arg2
...     return out1, out2