Release notes for 2023¶
All notable changes in reverse chronological order.
Version 2023.12.23¶
- Implemented DuckDBDataSaver for periodically storing data in a DuckDB database
- Fixed a performance issue in PathosEngine
- Fixed fit_pattern_nb
- Fixed staticization in Portfolio.from_signals
- Implemented a merging function `"imageio"` that merges Plotly figures and generates and saves a GIF animation
- UTC is no longer the default timezone in AVData
- Made all indicator packages optional in IndicatorFactory.list_indicators
- Fixed processing of `benchmark` in QSAdapter
- Custom indicators can now be registered with IndicatorFactory.register_custom_indicator, listed with IndicatorFactory.list_custom_indicators, and retrieved with IndicatorFactory.get_custom_indicator. Any registered custom indicator will be visible in IndicatorFactory.list_indicators.
- Outsourced creation of projection bands to GenericDFAccessor.band
- Implemented a function ptable to pretty-print a DataFrame of any size. Displays HTML when run in a notebook.
- Developed annotation functionality in utils.annotations that is now being used by many VBT parts to control behavior of function arguments
- Chunk taking specification `arg_take_spec` can be parsed from function annotations
- Sizer (argument `size` in @chunked) no longer has to be provided explicitly, as it can be reliably parsed from most ChunkTaker instances
- Standardized merging functionality and created a new module utils.merging. Keyword arguments can now be distributed across multiple merging functions when provided as a tuple. Also, merging functions can be specified via annotations (strings work too!)
- @parameterized can parse Param instances from function annotations
- Splitter.apply and its decorators can parse Takeable instances from function annotations
- Refactored the chunking functionality by moving most of the logic from @chunked to a new class Chunker, which stores the function to be chunked and the chunking configuration. By calling Chunker.run, the pipeline is run on the supplied arguments. Thus, the same pipeline can be run on different sets of arguments. The main reason for this class is that it can be easily subclassed to change the default chunking behavior; the new class can be passed (or set globally) as `chunker_cls`.
- Developed new user-friendly classes for chunking, such as ChunkedArray. They can be used in `arg_take_spec`, in annotations, and even passed as arguments for the chunk taking specification to be parsed on the fly.
- Argument `minp` in the exponential mean and standard deviation functions now refers to the minimum number of observations in `span`, rather than from the start as previously
- TVData.list_symbols can optionally return other fields besides symbols
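The Chunker idea above (a class that stores the function plus its chunking configuration, and runs the whole pipeline via a `run` method) can be sketched in plain Python; the names and signatures below are illustrative, not VBT's actual API:

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Chunker:
    """Stores a function and its chunking configuration (toy sketch)."""
    func: Callable      # function to run on each chunk
    chunk_size: int     # how many elements per chunk

    def split(self, values: Sequence):
        # Split the input into equally sized chunks (last chunk may be shorter)
        return [values[i:i + self.chunk_size] for i in range(0, len(values), self.chunk_size)]

    def run(self, values: Sequence):
        # Run the pipeline on the supplied arguments; the same Chunker
        # instance can be reused on different inputs
        return [self.func(chunk) for chunk in self.split(values)]

chunker = Chunker(func=sum, chunk_size=3)
print(chunker.run([1, 2, 3, 4, 5, 6, 7]))  # [6, 15, 7]
```

Subclassing such a class to override `split` is what makes the design easy to customize, which is the stated motivation for the refactor.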
- Fixed a DST issue that resulted in a duplicated index in QSAdapter
- Standardized the process of searching for objects in arguments. This functionality resides in utils.search and is being used by parameterization and templating.
- Similarly to Chunker being based on @chunked, there's now an equivalent class Parameterizer being based on @parameterized
- Extended combine_params to preferably generate parameter combinations lazily rather than picking them from a (potentially large) materialized grid. This significantly reduces the overhead of generating parameter combinations when `random_subset` is provided. For example, if the user wants to select 10 random combinations from three parameters with 1000 values each, VBT won't build a grid of 1,000,000,000 parameter combinations anymore
- Adapted AVData to use the alpha_vantage library by default (if installed). The API documentation parser can still be used by passing `use_parser=True`.
- Implemented a new data class BentoData for Databento
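The lazy generation behind combine_params can be illustrated without any VBT code: draw flat indices from the virtual grid and unravel each into one index per parameter, never materializing the product. A minimal sketch (the function name is hypothetical):

```python
import random

def sample_combinations(param_values, n, seed=None):
    """Pick n distinct parameter combinations without building the full grid."""
    rng = random.Random(seed)
    total = 1
    for values in param_values:
        total *= len(values)
    # random.sample over a range draws without materializing it
    picked = rng.sample(range(total), n)
    combos = []
    for flat in picked:
        combo = []
        for values in param_values:
            # unravel the flat index into one index per parameter
            flat, i = divmod(flat, len(values))
            combo.append(values[i])
        combos.append(tuple(combo))
    return combos

# 10 combinations out of a virtual grid of 1,000,000,000
params = [range(1000), range(1000), range(1000)]
combos = sample_combinations(params, 10, seed=42)
print(len(combos))  # 10
```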
- Fixed SQLData for Pandas < 2.0
- Added functions disable_caching and enable_caching to globally disable and enable caching respectively. Also, implemented context managers CachingDisabled and CachingEnabled to disable and enable caching within a code block respectively. Optionally, they can take a class, instance, method, or any other cacheable object, and switch the caching behavior only for that particular object. Moreover, implemented so-called "caching rules" that can be registered globally and applied to any new setups automatically, making passive cache management possible (until now, rules could be enforced only on setups that were already registered). Finally, renamed any methods with the suffix `status` or `status_overview` to just `stats`.
- NDLData supports datatables by passing `data_format="datatable"`
- Wrote Forecast future price trends (with projections)
- Wrote Easily cross-validate parameters to boost your trading strategy
Version 2023.10.20¶
- Implemented Data.to_feather for storing data in Feather files and FeatherData for loading data from Feather files using PyArrow
- Implemented Data.to_parquet for storing data in Parquet files and ParquetData for loading data from Parquet files using PyArrow or FastParquet
- Implemented Data.to_sql for storing data in a SQL database, SQLDataSaver for periodically storing data in a SQL database, and SQLData for loading data from a SQL database using SQLAlchemy
- Implemented Data.to_duckdb for storing data in a DuckDB database and DuckDBData for loading data from a DuckDB database
- Implemented Data.sql to run analytical SQL queries on entire data instances using DuckDB
- Improved file discovery in CSVData and HDFData
- Extended the market scanning functionality of TVClient to allow filtering by market(s) as well as additional information such as capitalization
- Redesigned Data.transform to attach correct feature/symbol classes depending on the selected mode
- Redesigned setting resolution in various classes
- Added a new execution engine MpireEngine based on mpire
- Refactored some execution engines such as PathosEngine
- Implemented class methods PortfolioOptimizer.row_stack and PortfolioOptimizer.column_stack
- Optimized PortfolioOptimizer.from_allocations and added PortfolioOptimizer.from_initial
- Added an option `settings.plotting.auto_rangebreaks` to automatically skip date ranges with no data when plotting with `fig.show`, `fig.show_png`, and `fig.show_svg`
- Added an argument `fit_ranges` in PatternRanges.plot to zoom in on one or more ranges
- State, value, and returns can be saved in dynamic from-signals as well
- Fixed SIGDET on two-dimensional data
- Fixed division-by-zero errors in pattern similarity calculation
- Renamed `latest_at_index` to `realign`, `resample_opening` to `realign_opening`, and `resample_closing` to `realign_closing`
- Added support for Blosc2 as the new default Blosc implementation
- Added option `HardStop` in StopExitPrice to ignore the open price check when executing a stop order
- Fixed Common Sense Ratio (ReturnsAccessor.common_sense_ratio)
- Implemented a method to resample returns - ReturnsAccessor.resample_returns
- Fixed merging when a single object is passed
- Enabled negative `cash_earnings` to be lower than the current (free) cash balance
- Made parsing of timestamps and datetime indexes with `dateparser` optional
- Added an option `settings.importing.clear_pycache` to clear the Python cache on startup
- Optimized accessors to initialize a Wrapping instance only once
- Fixed parsing of `technical` indicators
- Fixed and redesigned selection based on datetime components. Slices such as `"14:30":"15:45"` are working properly now.
- Switched to date-based release segments
- Migrated from `setup.py` to `pyproject.toml`
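The datetime-component slicing fixed above boils down to selecting positions whose time of day falls within a window. A stand-alone sketch (assuming inclusive bounds, which may differ from VBT's exact semantics):

```python
from datetime import datetime, time, timedelta

def select_time_slice(index, start, end):
    """Return positions whose time of day lies in [start, end] (inclusive)."""
    start_t = time.fromisoformat(start)
    end_t = time.fromisoformat(end)
    return [i for i, ts in enumerate(index) if start_t <= ts.time() <= end_t]

# Timestamps: 14:00, 14:30, 15:00, 15:30, 16:00, 16:30
index = [datetime(2023, 1, 1, 14, 0) + timedelta(minutes=30 * i) for i in range(6)]
print(select_time_slice(index, "14:30", "15:45"))  # [1, 2, 3]
```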
Version 1.14.0 (1 Sep, 2023)¶
- Optimized simulation methods for better performance. Both Portfolio.from_orders and Portfolio.from_signals have become faster when the number of orders/signals is relatively low. This is achieved by skipping a large chunk of the simulation logic whenever a NaN order or no signal is encountered.
- Implemented an ultrafast simulation method based on signals - from_basic_signals_nb - which is used automatically if you don't use stop and limit orders
- Implemented a module portfolio.nb.ctx_helpers with helper functions to be used in callbacks such as `signal_func_nb`. For example, to get the current SL target price, use `vbt.pf_nb.get_sl_target_price_nb(c)`.
- Introduced a new callback `post_signal_func_nb` to Portfolio.from_signals. Like `post_order_func_nb` in Portfolio.from_order_func, it gets called right after an order has been executed, regardless of its success.
- Adjustment function now also runs in Portfolio.from_holding
- Renamed `max_orders` to `max_order_records` and `max_logs` to `max_log_records` in Portfolio
- Introduced an argument `ignore_errors` to ignore just about any error when optimizing with PyPortfolioOpt and Riskfolio-Lib
- Implemented a new Splitter.from_n_rolling configuration: by taking `length="optimize"`, it optimizes the window length with SciPy such that splits cover most of the index and test ranges don't overlap (or any other set specified via `optimize_anchor_set`)
- Renamed `skip_minus_one` to `skip_not_found` in point- and range-based indexing
- Added the following 4 callbacks to execute:
- `pre_execute_func`: gets called only once before executing all calls/chunks
- `pre_chunk_func`: gets called before executing a chunk (chunking must be enabled!)
- `post_chunk_func`: gets called after executing a chunk (chunking must be enabled!)
- `post_execute_func`: gets called only once after executing all calls/chunks
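The four execute callbacks can be illustrated with a toy executor; the hook signatures below are simplified assumptions, not VBT's actual ones:

```python
def execute(calls, chunk_size=2, pre_execute_func=None, pre_chunk_func=None,
            post_chunk_func=None, post_execute_func=None):
    """Run zero-argument calls in chunks, firing the four hooks (toy sketch)."""
    if pre_execute_func:
        pre_execute_func()                      # once, before everything
    results = []
    chunks = [calls[i:i + chunk_size] for i in range(0, len(calls), chunk_size)]
    for idx, chunk in enumerate(chunks):
        if pre_chunk_func:
            pre_chunk_func(idx)                 # before each chunk
        chunk_results = [call() for call in chunk]
        if post_chunk_func:
            post_chunk_func(idx, chunk_results)  # after each chunk
        results.extend(chunk_results)
    if post_execute_func:
        post_execute_func(results)              # once, after everything
    return results

events = []
execute(
    [lambda i=i: i * i for i in range(5)],
    pre_execute_func=lambda: events.append("pre"),
    pre_chunk_func=lambda idx: events.append(f"chunk {idx}"),
    post_execute_func=lambda res: events.append(res),
)
print(events)  # ['pre', 'chunk 0', 'chunk 1', 'chunk 2', [0, 1, 4, 9, 16]]
```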
- @parameterized, Splitter.apply (along with the corresponding decorators), and Data.run allow the function to return `vbt.NoResult` for any iteration that should be skipped
- Redesigned custom indicators
- Each indicator now has a 1-dimensional Numba-compiled function and the corresponding 2-dimensional parallelizable, chunkable function. The latter function also accepts parameters as flexible arrays, which makes it possible to provide one parameter value per column.
- Removed the caching functionality to avoid too much specialization
- Indicator classes were outsourced into their respective modules under the new sub-package indicators.custom
- Implemented an ADX indicator
- To quickly run and plot a TA-Lib indicator on a single set of parameter values without using the indicator factory, talib_func and talib_plot_func can be used. They take the name of an indicator and return a function that can run/plot it. In contrast to the official TA-Lib implementation, they can properly handle DataFrames, NaNs, broadcasting, and resampling. Both of the indicator factory's methods `from_talib` and `from_expr` are based on these two functions.
- All indicators implemented with IndicatorFactory.with_apply_func accept the argument `split_columns` to run only one column at a time while retaining the dimensionality, and the argument `skipna` to automatically skip NaN (requires `split_columns`)
- Previously, passing a column full of NaN to TA-Lib raised the "inputs are all NaN" error - no longer! It will simply return an output array full of NaN.
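The NaN handling described above (compress the valid values, run the function, re-insert NaN at the original positions) can be sketched in pure Python; the `skipna` decorator below is a hypothetical stand-in for what a wrapper around TA-Lib might do:

```python
import math

def skipna(func):
    """Wrap a function that chokes on NaN: strip NaN before the call,
    re-insert NaN into the output at the original positions after."""
    def wrapper(values):
        mask = [not math.isnan(v) for v in values]
        clean = [v for v, ok in zip(values, mask) if ok]
        # All-NaN input: return all-NaN output instead of raising
        out_clean = func(clean) if clean else []
        out, it = [], iter(out_clean)
        for ok in mask:
            out.append(next(it) if ok else math.nan)
        return out
    return wrapper

@skipna
def double(values):
    if not values:
        raise ValueError("inputs are all NaN")
    return [v * 2 for v in values]

nan = math.nan
print(double([1.0, nan, 3.0]))  # [2.0, nan, 6.0]
print(double([nan, nan]))       # [nan, nan] instead of an error
```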
- Indicators can return raw outputs instead of an indicator instance by passing the option `return_raw="outputs"`. It can be used to return NumPy arrays directly when the Pandas format is not needed, but also to return arrays of arbitrary shapes since wrapping is no longer needed. It also has less overhead since the instance preparation step is skipped.
- Fixed treatment of NaN in IndicatorFactory.from_wqa101
- Fixed the issue of not being able to run all World Quant Alpha indicators in one go (it's possible now)
- Redesigned label generators
- Each generator now has a 1-dimensional Numba-compiled function and the corresponding 2-dimensional parallelizable, chunkable function. The latter function also accepts parameters as flexible arrays, which makes it possible to provide one parameter value per column.
- Adapted the pivot generation function to be consistent with PIVOTINFO
- Most functions were adapted to accept `high` and `low` instead of just `close`. If you don't have `high` and `low`, pass `close` two times.
- Generator classes were outsourced into their respective modules under the new sub-package labels.generators
- Redesigned signal generators
- Variable arguments such as `*args` were converted into regular arguments such as `place_args`
- Generator classes were outsourced into their respective modules under the new sub-package signals.generators
- Implemented a method SignalsAccessor.distance_from_last to measure the distance from the last n-th signal to the current element
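The idea behind SignalsAccessor.distance_from_last can be sketched as a single pass over a boolean mask; the exact edge-case semantics (the fill value before enough signals exist, whether the current row's own signal counts) are assumptions here:

```python
from collections import deque

def distance_from_last(mask, nth=1):
    """For each element, number of rows since the nth-to-last True value
    strictly before it; -1 while fewer than nth signals have been seen."""
    last = deque(maxlen=nth)  # positions of the last `nth` signals
    out = []
    for i, flag in enumerate(mask):
        if len(last) == nth:
            out.append(i - last[0])
        else:
            out.append(-1)    # not enough signals seen yet
        if flag:
            last.append(i)
    return out

mask = [True, False, False, True, False]
print(distance_from_last(mask))  # [-1, 1, 2, 3, 1]
```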
- Redesigned the Data class
- Data and any accompanying information can also be stored by features rather than symbols by wrapping the dictionary with feature_dict. In this case, the information is said to be feature-oriented: features are keys and symbols are columns. A data instance becomes feature-oriented if Data.data is feature-oriented.
- Behavior of features and symbols is interchangeable. For example, Data.select will select a symbol if the instance is symbol-oriented and a feature if the instance is feature-oriented, while indexing a column will select a feature if the instance is symbol-oriented and a symbol otherwise.
- Data.pull can run Data.fetch_feature to fetch features instead of symbols, same for Data.update. To switch to features, pass `keys` (the first argument) with `keys_are_features=True`, or pass `features`.
- Any instance can be switched to the opposite orientation with Data.invert
- Renamed the method `Data.fetch` to `Data.pull`, since fetching is just downloading while pulling involves merging. The method `Data.fetch` still exists for backward compatibility and calls the new `Data.pull` under the hood.
- Renamed `column_config` to `feature_config`, since columns are symbols in a feature-oriented instance
- Renamed `single_symbol` to `single_key`, since keys are features in a feature-oriented instance
- Renamed `symbol_classes` to `classes`, since keys are features in a feature-oriented instance
- Data classes were outsourced into their respective modules under the new sub-package data.custom
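Switching orientation as Data.invert does amounts to transposing a dict-of-dicts; a minimal stand-alone sketch of the idea:

```python
def invert(data):
    """Transpose a feature-oriented dict-of-dicts into a symbol-oriented one
    (and vice versa): outer keys become inner keys and the other way around."""
    out = {}
    for outer_key, inner in data.items():
        for inner_key, value in inner.items():
            out.setdefault(inner_key, {})[outer_key] = value
    return out

feature_oriented = {
    "Close": {"BTC-USD": 42000, "ETH-USD": 2200},
    "Volume": {"BTC-USD": 100, "ETH-USD": 500},
}
symbol_oriented = invert(feature_oriented)
print(symbol_oriented["BTC-USD"])  # {'Close': 42000, 'Volume': 100}
assert invert(symbol_oriented) == feature_oriented  # inversion is symmetric
```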
- Redesigned unit tests for more extensive testing
- Data.run uses execute to run a sequence of functions. This can be used to parallelize feature engineering.
- Removed hard-coded timeframes from TVClient to be able to use custom timeframes such as "30 seconds"
- The token argument in TVData has been renamed from just `token` to `auth_token` to be consistent with TradingView
- OHLCVDFAccessor and Data share the same functionality for managing OHLCV-based features - no more duplicated code. Also, `column_map` has been renamed to `feature_map` and now works in the opposite way: custom column names must point to standardized column names.
- Split OHLCVDFAccessor.plot into OHLCVDFAccessor.plot_ohlc and OHLCVDFAccessor.plot_volume
- A custom function located under `vbt.settings.plotting.pre_show_func` can be called on a figure each time before it is shown. Useful for figure preparations.
- New debugging tool: to check how two complex VBT objects differ, use `obj1.equals(obj2, debug=True)`
- Fixed issues with time-based stop orders for 32-bit Python on Windows
Version 1.13.1 (10 Jul, 2023)¶
- Enabled changing `cash_sharing` in Portfolio.column_stack
- Removed the size types `MNTargetValue`, `MNTargetPercent`, and `MNTargetPercent100`. The corresponding regular size types can be used for market-neutral behavior; see this discussion on Discord.
- Created the alias `grouped_index` for Grouper.get_index
- The simulation methods of Portfolio such as `from_signals` now accept a symbol as the first argument, such as "BTC-USD", which will be fetched using YFData (or provide the name of the class as well using `class_name:symbol`) and used as data. Designed for quick backtesting.
- Fixed a regular expression error appearing on Python 3.11
- Improved error handling when authenticating with TVClient
Version 1.13.0 (04 Jul, 2023)¶
- Added Python 3.11 support
- Rewrote all functions implemented with `numba.generated_jit` (deprecated) to use overloading
- Fixed backward compatibility for Pandas 1.*
- Conversion of datetimes and timedeltas to integer timestamps has been standardized across the entire codebase using to_ns
- Fixed handling of duplicate allocation indices in PortfolioOptimizer
- Index column can be disabled in CSVData
- Data.transform can be run per symbol
- Replaced the field `peak_idx` with `start_idx` and `peak_val` with `start_val` in Drawdowns for better compatibility. Also, renamed `drawdown_ranges` to `decline_ranges`.
- More methods for working with indexes support ExceptLevel
- Trades can plot multiple columns at once if `group_by=True` is provided to the method
- Implemented ExtPandasIndexer as an extension class to PandasIndexer, which adds an additional indexing property `xloc` utilizing get_idxs
- Argument `fps` in save_animation is automatically converted to `duration`
- Arguments `td_stop` and `dt_stop` can be accepted as parameters with multiple strings, such as `vbt.Param(["daily", "weekly", "monthly"])`
- Range breaks can be determined and applied automatically with `fig.auto_rangebreaks()`
- Cumulative returns can be plotted in percentage scale by enabling `pct_scale`
- Drawdowns can be plotted in absolute scale by disabling `pct_scale`
- Changed the default start value for all Numba-compiled return functions from 0 to 1
- Fixed `bm_close` not being resolved properly in Portfolio.from_order_func
- Switched parameter selection in @cv_split from `argmin` and `argmax` to `nanargmin` and `nanargmax` respectively
- Symbols in Data can be tuples. Level name(s) can be provided via `level_name`.
- Fixed dynamic resolution of stop entry price
- Implemented product of indexes (cross_indexes) and product of DataFrames (BaseAccessor.cross, also available as BaseAccessor.x)
- Implemented periodic datetime indexing through datetime components (DTC) and Numba functions. The new indexer class DTCIdxr can receive a datetime component, or a sequence/slice of such, and translate them into positions. It can also parse datetime-like strings such as "12:00", "thursday", or "january 1st" using `dateutil.parser`.
- Multiple indexers of the same type can be combined using logical operations, such as `vbt.autoidx(...) & vbt.autoidx(...)`. Both operands will be used as masks.
- Indexers can be used to modify data. This feature is supported by BaseAccessor, which can modify the object in place, for example with `obj.vbt.iloc[...] = ...`.
- Added a `rescale_to` argument to PortfolioOptimizer.from_allocate_func and PortfolioOptimizer.from_optimize_func to rescale all weights generated by the callback to a given min-max range
- Numba functions implementing crossovers can take the second array as a scalar or in any other flexible format
- Implemented a method (Data.realign) to align data to any frequency or custom index. By default, open price is aligned with GenericAccessor.realign_opening and any other feature is aligned with GenericAccessor.realign_closing. Useful when symbols are scattered across different times/timezones.
- Extended the @parameterized decorator with mono-chunks - parameter combinations can be split into chunks and each chunk can be merged to allow passing multiple parameter combinations as a single array and thus making use of column stacking. This approach means trading in more RAM for more performance. The function itself must be adapted to take arrays rather than single values.
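Mono-chunking can be illustrated without VBT: split the parameter values into chunks and hand each chunk to the function as one array-like argument, so one call covers many combinations. All names below are hypothetical:

```python
def parameterized_mono(func, params, chunk_size):
    """Split parameter values into mono-chunks and pass each chunk to the
    function as a list, so one call handles many combinations at once."""
    chunks = [params[i:i + chunk_size] for i in range(0, len(params), chunk_size)]
    results = []
    for chunk in chunks:
        # func must be adapted to take a list of values, not a single value
        results.extend(func(chunk))
    return results

def sma_windows(windows):
    # Toy stand-in: pretend we compute one result per window in a vectorized way
    return [w * 10 for w in windows]

print(parameterized_mono(sma_windows, [2, 3, 5, 8, 13], chunk_size=2))
# [20, 30, 50, 80, 130]
```

The trade-off noted in the text shows up here directly: larger chunks mean fewer calls (more column stacking, more speed) but a bigger working set per call (more RAM).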
- Added a `warmup` argument to execute to dry-run one function call
- Fixed behavior of Portfolio.get_market_value when there's a column that consists entirely of NaN
- Data.to_csv and Data.to_hdf can take `path_or_buf` and `key` as templates
- Added `run_arg_dict` to specify keyword arguments per function name in Data.run (as an alternative to `run_func_dict`, where we specify function arguments per argument name)
- Wrote the first part of the Cookbook
Version 1.12.0 (25 Apr, 2023)¶
- Data.pull also accepts as the first argument a dictionary where symbols are keys and keyword arguments are values
- CCXTData also accepts symbols in the format `EXCHANGE:SYMBOL`
- Added support for broadcasting along one axis in broadcast
- Added support for wrapping multiple arrays with Param and broadcasting them in broadcast. Arrays of various shapes are stacked along columns and automatically expanded to the biggest shape by padding. Previously, only single-value parameters could be tested.
- Renamed the key "alpha_vantage" to "av" in settings.data
- Renamed `capture` to `capture_ratio` in ReturnsAccessor
- Added a `user_agent` argument to TVClient
- Fixed unstack_to_array producing incorrect mappings when sorting is disabled
- Added merging functions "reset_column_stack", "from_start_column_stack", and "from_end_column_stack"
- Fixed the error in Trades.plot_running_edge_ratio
- Implemented stop laddering in Portfolio.from_signals. By enabling `stop_ladder` (or using an option from StopLadderMode), each stop type will behave like a ladder. That is, multiple ladder steps can be provided as an array (either one-dimensional, or two-dimensional with ladder steps defined per column). Moreover, ladders of different sizes can be wrapped with Param to test arbitrary combinations. Defining a ladder per row is not supported (this is possible only dynamically!). Ladders with a single value will behave as previously.
- Implemented GenericSRAccessor.to_renko_ohlc to convert a Series into an OHLC DataFrame in the Renko format
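The laddering behavior can be sketched as walking price highs through a list of stop levels, closing part of the position at each step. This toy model (take-profit only, long only) is an assumption-laden illustration, not VBT's simulator logic:

```python
def run_stop_ladder(entry_price, highs, steps, exit_sizes):
    """Walk price highs through a ladder of take-profit stops.

    steps: stop levels as fractions above the entry price
    exit_sizes: fraction of the position to close at each step
    Returns the executed exits as (bar, step, size) and the remaining position.
    """
    position = 1.0
    exits = []
    step = 0
    for i, high in enumerate(highs):
        # A single bar may trigger several ladder steps at once
        while step < len(steps) and high >= entry_price * (1 + steps[step]):
            size = min(exit_sizes[step], position)
            position -= size
            exits.append((i, steps[step], size))
            step += 1
    return exits, position

# 10% and 20% take-profit steps, closing half the position at each
exits, left = run_stop_ladder(100.0, [105, 112, 125], steps=[0.10, 0.20], exit_sizes=[0.5, 0.5])
print(exits)  # [(1, 0.1, 0.5), (2, 0.2, 0.5)]
print(left)   # 0.0
```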
- Parameters wrapped with Param now have a flag `hide` to hide the parameter from the product index, and an argument `map_template` to generate its values dynamically
- With automatic initial cash and no expenses, the determined initial cash will become 1 instead of 0 for a correct calculation of returns
- Implemented market-neutral size types `MNTargetValue`, `MNTargetPercent`, and `MNTargetPercent100`
- The method Splitter.take_range takes an argument `point_wise` that, when True, selects one index at a time and returns a tuple of the selected values
- Implemented classes for preparing functions and arguments. These classes take arguments, pre-process them, broadcast them, post-process them, substitute templates, prepare the target function, and build the target set of arguments. All these steps happen inside cached properties, such that none of them is executed twice. The result of a preparation can easily be changed and passed for execution: all portfolio methods accept the preparer or its result as the first argument. The following classes were implemented:
- BasePreparer: base class for vectorbtpro
- BasePFPreparer: base class for Portfolio
- FOPreparer: for preparing Portfolio.from_orders
- FSPreparer: for preparing Portfolio.from_signals
- FOFPreparer: for preparing Portfolio.from_order_func
- FDOFPreparer: for preparing Portfolio.from_def_order_func
- When providing keyword arguments, they are mostly merged with default keyword arguments to override them. But how do you completely remove a key from the default keyword arguments? For this, pass `unset` as the value of the key you want to remove!
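The unset mechanic can be modeled with a sentinel object during keyword-argument merging; `unset` below is a stand-in defined locally, not VBT's actual object:

```python
# Sentinel marking a key for removal (illustrative only)
unset = object()

def merge_kwargs(defaults, overrides):
    """Merge overrides into defaults, dropping any key whose value is `unset`."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if value is unset:
            merged.pop(key, None)  # remove the default entirely
        else:
            merged[key] = value
    return merged

defaults = {"freq": "1d", "tz": "UTC", "limit": 500}
print(merge_kwargs(defaults, {"limit": 100, "tz": unset}))
# {'freq': '1d', 'limit': 100}
```

Using an identity-checked sentinel (rather than, say, None) is what lets None remain a legitimate override value.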
- Complex VBT objects such as ArrayWrapper sometimes take other VBT objects such as Grouper as arguments. To change something deep inside the object, we previously had to manually and recursively rebuild the objects. For example: `wrapper.replace(grouper=wrapper.grouper.replace(group_by=True))` to enable grouping. To simplify this, the user can now pass `nested_=True` and provide the specification as a nested dict. For example: `wrapper.replace(grouper=dict(group_by=True), nested_=True)`.
- Fixed `pivots` and `modes` in PIVOTINFO
- Added an argument `delisted` to Data. Symbols with this argument enabled won't get updated.
- The object in Splitter.take can be a template. For example, you can do `splitter.take(vbt.RepEval("x.iloc[range_]", context=dict(x=x)))`.
- Switched to the newer paging feature in TVClient.search_symbols to download the complete set of symbols
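The `nested_=True` idea can be sketched with dataclasses: when a replacement value is itself a dict, recurse into the nested object instead of overwriting it. A minimal illustration (not VBT's implementation):

```python
from dataclasses import dataclass, replace, is_dataclass

def nested_replace(obj, spec):
    """Rebuild a dataclass, descending into nested dataclasses whenever the
    replacement value is given as a dict."""
    changes = {}
    for key, value in spec.items():
        current = getattr(obj, key)
        if isinstance(value, dict) and is_dataclass(current):
            changes[key] = nested_replace(current, value)  # recurse
        else:
            changes[key] = value
    return replace(obj, **changes)

@dataclass(frozen=True)
class Grouper:
    group_by: bool = False

@dataclass(frozen=True)
class Wrapper:
    grouper: Grouper
    ndim: int = 2

wrapper = Wrapper(grouper=Grouper())
new_wrapper = nested_replace(wrapper, {"grouper": {"group_by": True}})
print(new_wrapper.grouper.group_by)  # True
print(new_wrapper.ndim)              # 2 (untouched)
```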
- Completely refactored indexing. Instead of converting complex indexing specifications to integer positions in a single method `get_indices`, each kind of indexing (also called an "indexer") is implemented in its own class. For example, LabelIdxr is an indexer for labels while DatetimeIdxr is an indexer for datetimes. The indexer AutoIdxr determines the kind automatically. Such indexers are generic and can be passed to RowIdxr to notify vectorbtpro that we want to select a row, and to ColIdxr to select a column. The indexer Idxr takes any of the above, or a row and a column indexer. For example, `vbt.Idxr(0)` will select the first row and all columns, while `vbt.Idxr(0, "BTC-USD")` will select the first row of the column with the label "BTC-USD" only.
Note
In your code, replace RowIdx with rowidx, ColIdx with colidx, RowPoints with pointidx, RowRanges with rangeidx, and ElemIdx with idx.
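The indexer design can be illustrated with toy classes: each indexer knows how to translate its own kind of specification into positions, and an automatic indexer dispatches between them. The class names mirror the text, but the implementations are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class LabelIdxr:
    """Translate a label into an integer position (toy version)."""
    label: object
    def get(self, labels):
        return labels.index(self.label)

@dataclass
class PosIdxr:
    """Already a position; returned as-is."""
    pos: int
    def get(self, labels):
        return self.pos

@dataclass
class AutoIdxr:
    """Determine the kind automatically: ints are positions, anything else a label."""
    value: object
    def get(self, labels):
        idxr = PosIdxr(self.value) if isinstance(self.value, int) else LabelIdxr(self.value)
        return idxr.get(labels)

columns = ["BTC-USD", "ETH-USD"]
print(AutoIdxr("ETH-USD").get(columns))  # 1
print(AutoIdxr(0).get(columns))          # 0
```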
- Implemented a class IdxSetter, which takes a list of tuples with indices (see above) and values, creates a new array, and modifies it according to the list by setting each value at the corresponding index. Previously, this was implemented by `ArrayWrapper.fill_using_index_dict`. Also, renamed `fill_using_index_dict` to ArrayWrapper.fill_and_set.
- Implemented a class IdxSetterFactory, which can generate one or multiple IdxSetter instances from an index dictionary, Series, DataFrame, or a sequence of records. This makes it possible to pass the argument `records` to any Portfolio method to unfold one or multiple arguments from it.
- Portfolio.from_signals accepts new arguments `long_entries` and `long_exits`, which can replace (or enhance) `entries` and `exits` respectively
- When any broadcastable argument in Portfolio is provided as an index dictionary, the default value is now taken from the global settings rather than hard-coded (apart from the argument `size` in Portfolio.from_orders, which is always NaN when a new array is created or an existing array is reindexed)
- Added the argument `layout` to the methods of the Plotly Express accessor to change the layout of any figure
- Removed the argument `date_parser` from CSVData since it's incompatible with Pandas 2.0
- Added support for Pandas 2.0
- Updated the dark theme of the website and graphs
Version 1.11.1 (18 Mar, 2023)¶
- Fixed the staticization paths on Windows
- Renamed class methods of IndicatorFactory from `get_indicators` to `list_indicators`
- Renamed class methods of Data from `get_symbols` to `list_symbols`
- Implemented various methods for a better indicator search:
- IndicatorFactory.list_vbt_indicators to list all vectorbt indicator names
- IndicatorFactory.list_indicators to list all supported indicator names
- IndicatorFactory.get_indicator and its shortcut `vbt.indicator` to find and return any indicator by its name
- Enhanced indicators by the following:
- IndicatorBase.param_defaults to get the parameter defaults of an indicator
- IndicatorBase.unpack to return the outputs of an indicator as a tuple
- IndicatorBase.to_dict to return the outputs of an indicator as a dict
- IndicatorBase.to_frame to return the outputs of an indicator as a DataFrame
- Refactored Data.run to use the new indicator search
- Providing `tz` to CSVData will localize/convert each date to this timezone. Especially useful when data contains dates with various timezones (due to daylight saving time, for example).
- All record classes subclassing PriceRecords have received new fields describing the bar of the record. For example, you can now get the low price of the bar of each order using `pf.orders.bar_low`.
- Implemented MappedArray.to_readable, which is an equivalent of Records.records_readable
- Implemented Orders.price_status that returns whether the price of each order exceeds the bar's OHLC
- Fixed `call_seq` being materialized even with `attach_call_seq=False` in Portfolio.from_order_func
- Added a progress bar to PathosEngine. Disabled by default; to enable it, pass `engine_config=dict(show_progress=True)`.
- Implemented Ranges.crop to remove irrelevant OHLC data from records, which is especially useful for plotting highly granular data
Version 1.11.0 (15 Mar, 2023)¶
- Made `run_pipeline` a class method of IndicatorBase so it can be overridden by the user
- Implemented an accessor method GenericAccessor.groupby_transform to group and then transform a Series or DataFrame
- Implemented new trade properties and methods:
- Trades.best_price_idx
- Trades.worst_price_idx
- Trades.expanding_best_price
- Trades.expanding_worst_price
- Trades.expanding_mfe with `plot_` and `_returns` versions
- Trades.expanding_mae with `plot_` and `_returns` versions
- Data classes that work with the file system and can discover file paths subclass a special data class FileData
- Moved RemoteData.from_csv and RemoteData.from_hdf to Data.from_csv and Data.from_hdf respectively
- Portfolio.from_signals supports new time-based stop orders that can be provided via two arguments: `td_stop` for timedelta-like inputs and `dt_stop` for datetime-like inputs. Both are very similar to `limit_tif` and `limit_expiry` respectively, with the distinction that a time-based stop is executed one tick before the actual deadline.
- Greatly improved the resolution of `dt_stop` and `limit_expiry`, which can now distinguish between timedelta and datetime strings. They can also take ISO time strings such as "16:00".
- Renamed simulator functions: `simulate_from_orders_nb` → `from_orders_nb`, `simulate_from_signals_nb` → `from_signals_nb`, `simulate_from_signal_func_nb` → `from_signal_func_nb`, `simulate_nb` → `from_order_func_nb`, `simulate_row_wise_nb` → `from_order_func_rw_nb`, `flex_simulate_nb` → `from_flex_order_func_nb`, and `flex_simulate_row_wise_nb` → `from_flex_order_func_rw_nb`
- Arguments `order_func_nb` and `order_args` have become optional in Portfolio.from_order_func. Also, the argument `flexible` has been removed in favor of `flex_order_func_nb` and `flex_order_args`.
- Integrated order mode (`order_mode=True`) directly into Portfolio.from_signals to make it possible to use target sizes instead of signals. Under the hood, it uses a dynamic signal function that translates orders into signals.
- Class methods in SignalsAccessor that take `shape` as the first argument can also take an array wrapper via this argument
- Scheduler supports zero_offset for seconds
- Previously, the trailing target price was updated in accordance with a percentage stop. Now, it's updated in accordance with an absolute stop.
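The difference between a percentage and an absolute trailing stop can be shown with a small tracker of the running peak; a sketch under simplified assumptions (long side, highs only), not VBT's internal implementation:

```python
def trailing_target(highs, init_price, stop, absolute=True):
    """Track a trailing stop target while price makes new highs.

    With absolute=True the target trails at a fixed distance below the peak;
    otherwise it trails at a percentage below it.
    """
    peak = init_price
    targets = []
    for high in highs:
        peak = max(peak, high)  # the peak only ever ratchets up
        target = peak - stop if absolute else peak * (1 - stop)
        targets.append(round(target, 6))
    return targets

highs = [100, 110, 105, 120]
print(trailing_target(highs, 100, stop=5))                     # [95, 105, 105, 115]
print(trailing_target(highs, 100, stop=0.05, absolute=False))  # [95.0, 104.5, 104.5, 114.0]
```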
- SignalContext has a field `fill_pos_info` of the type trade_dt that holds information on the current position, such as the average entry/exit price and open statistics
- Fixed stop exit price provided as a floating array
- Each stop can define its own exit size and size type via `exit_size` and `exit_size_type` respectively. Can be used only dynamically.
- Each stop can define its own ladder. By setting `ladder=True` in the information record, the stop won't be cleared after execution. Upon execution, the current ladder step index and row are automatically updated in `step` and `step_idx` respectively, such that they can be used to update the current step with new information. Can be used only dynamically.
- Fixed the time resolution of stop orders among themselves. For example, previously, if an SL stop was marked for execution at the closing price and a TP stop was marked for execution at the opening price, the SL stop was executed (pessimistically). Now, the TP stop will be executed since it precedes the SL stop in time.
- Implemented TVClient.scan_symbols and the corresponding argument `market` in TVData.list_symbols to scan a market and return the full list of symbols traded in that market
- Important! To avoid confusion, passing an index (`pd.Index`) to the broadcaster no longer behaves like a parameter (Param). An index will be converted into a Series with the same index and values.
Example
If you previously used pd.Index([0.1, 0.2]) to test multiple values, now use vbt.Param([0.1, 0.2]).
- Implemented a module (utils.cutting) that is specialized in "cutting" annotated sections from any code. Not only can it cut out a block of code, but it can also transform it based on special commands annotated using comments in that code. This way, there's no longer a need to duplicate lengthy code, since one can just copy, transform, and paste it into any Python file, in a fully automated fashion.
- Thanks to the cutting feature, Portfolio.from_signals and Portfolio.from_order_func can now be instructed to transform their non-cacheable code into cacheable code! This process is called "staticization" and can be enabled simply by passing `staticized=True`. For this to work, all Numba simulation functions that take callbacks as arguments were annotated beforehand. After extraction and transformation, the new cacheable code is saved to a Python file in the same directory (by default). Built-in callbacks will be automatically included in the file as well. User-defined callbacks are automatically imported at the beginning of the file, given that they were defined in a Python file and not in a Jupyter notebook.
Version 1.10.3 (26 Feb, 2023)¶
- Added `adjustment` argument to TVData
- Disabled searching for the earliest date in CCXTData; you can still enable it by passing `find_earliest_date=True`. Also, `start` and `end` are both None now since some exchanges don't like 0 being passed as a start timestamp.
- Added various helper functions for working with the file system in utils.path_
- Added `log_returns`, `daily_returns`, and `daily_log_returns` in both Data and Portfolio
- Scattergl is disabled by default since it causes problems when one trace has it enabled and another doesn't. You can still enable `use_gl` globally under `vbt.settings.plotting`.
- Fixed a circular import error when attempting to load a configuration file with pickled data on startup
- Fixed size granularity not being properly applied due to round-off errors
- Fixed positions not being properly reversed when size granularity is enabled
- Portfolio can now take `sim_out` as the second argument and extract order records and other arrays from it
- Added `random_subset` argument to Param to select a random subset from a single parameter
- Added Optuna as an optional dependency
- Fixed split and set selection in Splitter.take and Splitter.apply
- Fixed the calculation of the grouped gross exposure for both directions
- Fixed an issue with the infinite leverage and order fees greater than the available free cash
- Implemented NB and SP rolling z-score (GenericAccessor.rolling_zscore)
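For reference, a rolling z-score can be sketched in plain pandas (a conceptual sketch; the function name, signature, and `ddof` choice here are assumptions, not the vectorbt implementation):

```python
import pandas as pd

def rolling_zscore(sr: pd.Series, window: int) -> pd.Series:
    """Z-score of each value relative to its trailing window."""
    mean = sr.rolling(window).mean()
    std = sr.rolling(window).std(ddof=0)  # population std; vectorbt's choice may differ
    return (sr - mean) / std

sr = pd.Series([1.0, 2.0, 3.0, 4.0])
print(rolling_zscore(sr, 2))  # every complete window of consecutive integers yields 1.0
```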
- Wrote Pairs trading
Version 1.10.2 (15 Feb, 2023)¶
- Fixed cases where size type is not being checked properly when resolving signals
Version 1.10.1 (15 Feb, 2023)¶
- Fixed cases where leverage is not being applied when reversing positions
- Timezones are now parsed with Pandas rather than `zoneinfo` to avoid issues with Python 3.8
Version 1.10.0 (14 Feb, 2023)¶
- Added an argument `use_class_ids` to represent classes by their ids rather than converting them into a stream of bytes when saving to or loading from a config file
- Fixed the error "expected return exceeding the risk-free rate" in the newest PyPortfolioOpt
- Fixed timezone in TVData
- Renamed `AlphaVantageData` to `AVData`
- Made grid lines darker in the dark theme
- Data.run can now directly return indicator outputs instead of indicator instances
- Price types "nextopen" and "nextclose" can now be provided per column
- Bar skipping in Portfolio.from_orders happens automatically
- Fixed the color schema in the Seaborn theme
- Added a signal function that can translate target size types into signals. Also added an option `pf_method` in Portfolio.from_optimizer to rebalance using Portfolio.from_signals.
- Added warnings whenever `init_position` without `init_price` is provided
- Made accessors wrap their own objects much faster
- Added a new execution engine PathosEngine that uses pathos pools with better pickling.
- Renamed `SequenceEngine` and its identifier "sequence" to `SerialEngine` and "serial" respectively
- Fixed resolution of asset classes in riskfolio_optimize after dropping NaN columns
- Entirely refactored importing mechanics. For instance, auto-import can be disabled in the settings to load vectorbt in under one second. Also, whether to make an object available via `vbt.[object]` is now decided by the module where the object is defined, using `__all__`. This also greatly improves type hints in IDEs.
- There are two new shortcuts: `vbt.PF` for Portfolio and `vbt.PFO` for PortfolioOptimizer
- Fixed best and worst price in Trades to also include the entry and exit price. This also improves the (running) edge ratio, MAE, and MFE metrics.
- Renamed the argument `stop_price` to `init_price` in check_stop_hit_nb
- Merged the `OLS` and `OLSS` indicators into just `OLS`
- Datetime-like strings will now be parsed largely using Pandas rather than the `dateparser` package
- Made weekdays start from 0 in Numba methods by default (same as in Pandas)
- Added three new methods for more convenience (all support integer and timedelta lookback periods):
  - GenericAccessor.ago - get the value `n` periods ago
  - GenericAccessor.any_ago - whether a condition was True at any time in the last `n` periods
  - GenericAccessor.all_ago - whether a condition was True at all times in the last `n` periods
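For integer lookbacks, the behavior of these methods can be approximated with plain pandas (a rough sketch of the semantics, not the vectorbt implementation; timedelta lookbacks are not covered here):

```python
import pandas as pd

sr = pd.Series([1, 2, 3, 4, 5])
cond = sr > 2
n = 3

ago = sr.shift(n)                                # value n periods ago
any_ago = cond.astype(int).rolling(n).max() > 0  # True at any time in the last n periods
all_ago = cond.astype(int).rolling(n).min() > 0  # True at all times in the last n periods
```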
- Simplified signal generation. There is no more need to manually set the signal in `out`; returning its (relative) index is enough. Also, entry signal generators have got new options `only_once` and `wait` for full compatibility with exit signal generators.
- Completely refactored RandomOHLCData and GBMOHLCData. There's no more need to explicitly specify the frequency for resampling; it will now default to 50 ticks per bar. Also, the number of ticks per bar can be specified as an array for the best flexibility.
- Limit and stop values in Portfolio.from_signals can now be provided as a target price when `delta_format="target"`
- Greatly improved approximation of order values for automatic call sequences
- BCO and Param have been split to become totally independent of each other
- Dropped Python 3.7 support
- Default timezone can now be set using `tz` when fetching data. This will do two things: 1) every `start` and `end` date provided as a string without a timezone will be assumed to be in this default timezone, and 2) output data will be converted to this timezone.
- Timeframe will be converted into a frequency and persisted in the wrapper when fetching data. This way, if the fetched data has gaps, vectorbt objects such as the portfolio will still know the true frequency.
- There's no more need to build a resampler to resample to a target index: passing the target index directly will work too
- Renamed `fill_state` to `save_state` and `fill_returns` to `save_returns`, and added an argument `save_value` to save the portfolio value in Portfolio.from_orders and Portfolio.from_signals
- Refactored parameter generation. The function combine_params can now be used to generate parameters without building the index. Also, filtering of conditional parameters and selecting a random subset have become much faster (especially when used together).
- Renamed `entry_place_func` to `entry_place_func_nb` and `exit_place_func` to `exit_place_func_nb`
- The function execute can now put multiple calls into the same chunk using `distribute="chunks"`, execute them serially, and distribute the chunks themselves. This makes certain engines, such as those for multiprocessing, much more effective. Also, `tasks` can be provided as a template that gets substituted once chunking metadata is available (requires `size`).
- Reduced memory footprint when indicators are executing many parameter combinations
- Refactored the jitted loop in the indicator factory. Multiple parameter combinations can be put into a single jitted loop, while multiple jitted loops can be distributed using execute. The argument `jitted_loop` can now be toggled when running an indicator. There's a new argument `jitted_warmup` to run one parameter combination to compile everything before running everything else.
- Datetime is now mainly parsed using `pd.Timestamp` instead of `pd.to_datetime`
- Improved parsing of datetime-like and timezone-like objects
- Implemented a function pdir that beautifully prints the attributes of any Python object (including packages and modules!). Use this function in conjunction with phelp and pprint to discover objects.
- Added an option `dropna` when calculating crossovers to ignore NaNs (default is to keep them)
- Enhanced BaseAccessor.eval with multiline expressions
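The effect of ignoring NaNs can be sketched as follows (a conceptual sketch with an assumed function name and signature; the actual vectorbt crossover logic may differ in details):

```python
import numpy as np

def crossed_above(a, b, dropna=False):
    """True where `a` crosses above `b`. With dropna=True, NaN rows are
    skipped entirely, so the above/below state carries over them."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    nan = np.isnan(a) | np.isnan(b)
    above = a > b  # NaN comparisons are False
    out = np.zeros(len(a), dtype=bool)
    was_above = True  # start "above" so the first bar never counts as a cross
    for i in range(len(a)):
        if dropna and nan[i]:
            continue
        out[i] = above[i] and not was_above
        was_above = above[i]
    return out

a = [2.0, 3.0, np.nan, 4.0]
b = [1.0, 1.0, np.nan, 1.0]
crossed_above(a, b)               # NaN breaks the state: a "cross" appears after the gap
crossed_above(a, b, dropna=True)  # state carries over the NaN: no spurious cross
```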
- Added support for a variety of pickle compression algorithms. Each algorithm has its own set of file extensions that can be automatically recognized to decompress the saved data. For example, pickling with the compression `blosc` will add the extension `.pickle.blosc`.
- Added the ability to select a date range with `start` and `end` in local data classes. This can be done before loading data in HDFData (efficient) and only after loading data in CSVData (not so efficient).
- The default PyTables format is now "table"
- YFData now automatically recognizes the timezone of each ticker and uses it. If tickers have different timezones, either fetch them separately or provide a unified timezone with `tz_convert`.
- Not only will vectorbt check whether some optional packages are installed, it will also verify that their versions match the required ones for full compatibility
- Renamed `deep_substitute` to `substitute_templates`
- Iterating over symbols when fetching or updating data is now done with execute such that it can be distributed. For example, using `execute_kwargs=dict(cooldown=1)` will sleep for one second after fetching each symbol since it's now executed with SerialEngine.
- Updated Features with dozens of new examples
Version 1.9.0 (15 Jan, 2023)¶
- Implemented leverage. Two leverage modes are supported:
  - Lazy: uses the available cash whenever possible, and only if it's not enough to fulfill the whole operation does it use leverage. This allows for effectively increasing the available cash with debt.
  - Eager: uses leverage even if the whole operation can be fulfilled using the available cash. This is similar to opening a micro-account with a subset of the available cash and using lazy leverage on it.
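The difference between the two modes can be illustrated with a toy calculation (purely illustrative arithmetic and a hypothetical helper, not the actual simulation code; fees and collateral are ignored):

```python
def split_order_value(value, free_cash, leverage, lazy=True):
    """Split an order's value into own cash and borrowed cash (debt)."""
    assert value <= free_cash * leverage, "not enough buying power"
    if lazy:
        cash_used = min(value, free_cash)  # borrow only what's missing
    else:
        cash_used = value / leverage       # always lever up the committed cash
    return cash_used, value - cash_used

# $100 of free cash, 2x leverage
split_order_value(150.0, 100.0, 2.0, lazy=True)   # (100.0, 50.0): cash first, then debt
split_order_value(50.0, 100.0, 2.0, lazy=True)    # (50.0, 0.0): no leverage needed
split_order_value(50.0, 100.0, 2.0, lazy=False)   # (25.0, 25.0): levered anyway
```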
- Leverage can also be infinite (`leverage=np.inf`) to fulfill any position size requirement as long as there's at least some free cash. The user is responsible for choosing a proper position size.
- Refactored the lowest-level simulation logic:
- Added AccountState.locked_cash, which together with AccountState.debt keeps track of the current leverage and shorting state. Used in both the long and short directions. Don't forget to initialize it with zeros and add it to AccountState and ExecState.
- Buy and sell functionality is now distributed over four distinct functions (none of which supports position reversal):
- The functions buy_nb and sell_nb orchestrate them to enable position reversal
- Regular cash (AccountState.cash) won't be used in buy and sell operations anymore, only in equity calculations. This is because it can go below zero when leverage is used. Transactions now use solely free cash (AccountState.free_cash). This also means that free cash (and not regular cash) must be greater than zero to allow most operations.
- Free cash isn't guaranteed to be zero or above. Some operations, such as those partially closing leveraged positions or short positions, can make it become negative, thus offsetting costs to other columns.
- The order in which account/execution state and order result are returned has been reversed (that is, `order_result, new_account_state` instead of `new_account_state, order_result`)
- Renamed and reordered log fields
- When reversing positions, SizeType.Percent now refers to the available resources after the current position is closed (that is, in a long position that can be reversed, 50% means close the current position and open a short one by using 50% of the available free cash left after closing it)
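A toy calculation of the new semantics (illustrative numbers only; fees and short-selling collateral mechanics are ignored here):

```python
# Long 10 units at price 5.0, with 20.0 of free cash; order: size of -50% (Percent)
position = 10.0
price = 5.0
free_cash = 20.0

# Step 1: close the long position, freeing up its value
free_cash_after_close = free_cash + position * price   # 70.0
# Step 2: 50% now refers to the resources left AFTER closing
short_value = 0.5 * free_cash_after_close              # 35.0
short_size = short_value / price                       # short 7.0 units
```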
- The limitation of Portfolio.from_signals being unable to reverse positions when SizeType is `Percent` or `Percent100` has been lifted
- Some simulation functions accept an argument `fill_state` that can fill position, cash, debt, and other fields from AccountState as arrays to avoid reconstructing them later
Note
Reconstruction of free cash would only yield the same result as during the simulation if leverage is disabled. For enabled leverage, use fill_state to pre-compute the array.
- Returns for negative positions won't be flipped anymore, to make them consistent with `pct_change`. This will also make equity and cumulative returns produce the same plot when equity dips below zero.
- Refactored flexible indexing due to an unresolved Numba bug:
- There's no more `flex_select_auto_nb`; instead there's an entire module base.flex_indexing with Numba functions tailored to different array formats
- Removed the `flex_2d` argument completely
- It's no longer required to wrap scalars with zero-dimensional NumPy arrays - most Numba functions will convert any input into a one-dimensional or two-dimensional NumPy array. Here are the general rules depending on the annotation of an argument:
  - `FlexArray1dLike`: can be passed as a scalar or an array that can broadcast to one dimension
  - `FlexArray2dLike`: can be passed as a scalar or an array that can broadcast to two dimensions
  - `FlexArray1d`: must be passed as a flexible one-dimensional array
  - `FlexArray2d`: must be passed as a flexible two-dimensional array
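The idea behind flexible arrays can be sketched like this (a simplified sketch with a hypothetical function name; the actual functions in base.flex_indexing differ):

```python
import numpy as np

def flex_select_1d(arr: np.ndarray, i: int):
    """Select the i-th element; a length-1 array acts as a broadcast scalar."""
    if arr.shape[0] == 1:
        return arr[0]
    return arr[i]

flex_select_1d(np.array([7.0]), 99)      # 7.0 - broadcasts against any row
flex_select_1d(np.array([1.0, 2.0]), 1)  # 2.0 - regular per-row selection
```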
- Important! From now on, one-dimensional arrays will always broadcast against rows since we're primarily working on time-series data. This makes vectorbt's broadcasting mechanism different from NumPy's! Created a range of helper functions for broadcasting both arrays and shapes in the module base.reshaping. Also ensured that all vectorbt functions use the in-house broadcasting functions, not the NumPy ones.
Example
Previously, a list ["longonly", "shortonly", "both"] was applied per column and could be used to test multiple position directions; now the same list will be applied per row, so use [["longonly", "shortonly", "both"]] instead.
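In plain NumPy terms, the difference looks like this (a sketch that emulates the rule with explicit reshaping, not vectorbt's own broadcasting functions):

```python
import numpy as np

shape = (3, 2)  # 3 rows (time) x 2 columns (assets)
arr = np.array([10, 20, 30])

# vectorbt rule: a one-dimensional array fills ROWS
vbt_style = np.broadcast_to(arr.reshape(-1, 1), shape)
# NumPy rule: a one-dimensional array fills COLUMNS
np_style = np.broadcast_to(np.array([1, 2]), shape)
```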
- Made data typing errors more descriptive in portfolio
- Splitter.from_rolling now has an option to roll backwards
- Fixed Resampler.map_bounds_to_source_ranges returning garbage values when `skip_not_found=True`
- Renamed the argument `to_py_timezone` to `to_fixed_offset`
- Series names are automatically stringified in QSAdapter
- The author of tvdatafeed has pulled the package from PyPI such that it cannot be installed with `pip` anymore, thus the package has been integrated directly into vectorbt (see data.custom.tv) and additionally refactored
- The TradingView client TVClient now uses the pro data feed by default, making it possible to pull 20,000 data points at once, even without signing in. If you want to sign in, you can use username and password, or manually parse the token from TradingView's website and use it (for example, if proper authentication requires you to complete a captcha).
- Implemented functions pprint and phelp that do `print` and `help` but beautifully
- Earliest date detection in CCXTData was (at least somewhat) fixed for exchanges that return the latest data point rather than the earliest one when `limit=1` is used
- Calculating and plotting allocations no longer requires grouping
- Input arrays in indicators built with IndicatorFactory aren't cacheable by default to enable pickling of indicator class definitions
- The argument `order_records` now comes before the argument `close` in the constructor method of Portfolio. Make sure to use keyword arguments to avoid problems!
- MappedArray.to_pd can take a reducing function or its name to automatically aggregate data points that fall into the same row. This is especially convenient for resampling P&L, where `reduce_func_nb="sum"` can be provided.
- The package python-telegram-bot for building Telegram bots has released version 2.0.0, which introduces a lot of breaking changes. Even though vectorbt's adaptation to this version was successful, the bot cannot run in the background due to `asyncio` (if you know the fix, ping me!). If you need this functionality, install the latest `python-telegram-bot` version before 2.0.0 and vectorbt will switch to the previous functionality automatically. The tutorial still requires the previous version until the limitation is fixed.
- Fixed errors when no symbol could be fetched in custom data classes
- Refactored configs and pickling (persisting on disk):
  - Config options can no longer be passed directly to Config; they need to be packed as `options_`
  - Most vectorbt objects can now be pickled and unpickled using `pickle` or `dill`, without the need to prepare them. The mechanism is rather simple: each object defines its own reconstruction state of the type RecState, which is a tuple of positional arguments, keyword arguments, and attributes required to create an identical instance. Only this state is pickled and unpickled, and together with the class it's used to reconstruct the object using reconstruct. This allows for the highest flexibility because vectorbt objects can now be saved in lists, dicts, and even as values of DataFrames!
  - No more need to pickle class definitions: only the path to the class is pickled, making dumps more lightweight but also more robust to API changes
  - There's new functionality that allows the user to change how a pickled object is reconstructed in case its class or arguments have changed
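The reconstruction mechanism can be sketched in plain Python (the names mirror the description above, but the field names, demo class, and details are assumptions, not the actual vectorbt code):

```python
from typing import Any, NamedTuple

class RecState(NamedTuple):
    """Everything needed to recreate an identical instance."""
    init_args: tuple
    init_kwargs: dict
    attr_dct: dict

def reconstruct(cls: type, rec_state: RecState) -> Any:
    """Recreate an object from its class and reconstruction state."""
    obj = cls(*rec_state.init_args, **rec_state.init_kwargs)
    for name, value in rec_state.attr_dct.items():
        setattr(obj, name, value)
    return obj

class Wrapper:  # hypothetical stand-in for a vectorbt object
    def __init__(self, data, freq="1d"):
        self.data = data
        self.freq = freq

state = RecState((["BTC-USD"],), {"freq": "1h"}, {"tag": "demo"})
same = reconstruct(Wrapper, state)  # only `state` (not the class body) needs pickling
```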
Warning
If you have any vectorbt objects pickled by the previous versions of vectorbt, they won't be unpickled by the newer version. It's advised to either recalculate them using the newer version, or first unpickle them using the previous version, save their components separately, and then import and connect them using the newer version. Ping me if you meet any issues.
- Not only can vectorbt objects be seamlessly converted into a stream of bytes (i.e., pickled), they can now also be converted into regular configuration files with the extensions `.config`, `.cfg`, and `.ini`! For this, vectorbt extends the `configparser` functionality with its own parsing rules (see Pickleable.encode_config and Pickleable.decode_config). Since most vectorbt objects can be represented using dictionaries, configuration files are an ideal text format where sections represent (sub-)dictionaries and key-value pairs represent dictionary items. In addition, the following features were implemented:
  - Parsing of literals (the strings `"True"` and `"100.0"` are recognized as a bool and a float respectively)
  - Nested dictionaries (the section `[a.b]` becomes a sub-dict of the section `[a]`)
  - References (a key can reference another value in another section or even the section itself, such that a reference `&a.b` will be substituted by the value of the key `b` in the dict `a`)
  - Single and multi-line expressions (`yf_data = !vbt.YFData.pull("BTC-USD")` will download the data and put it under the key `yf_data`)
  - Round trip (a successful round trip consists of converting an object to text and back again to an object without losing information)
- Upon importing, vectorbt can automatically detect a file with the name "vbt" and any supported extension in the current working directory, and update the settings. This makes setting the default theme like a breeze:
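For example, a settings file along these lines could live next to your notebook (the section and key names below are assumptions for illustration; check the settings documentation for the real schema):

```ini
; vbt.ini in the current working directory
[plotting]
default_theme = dark
```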