Bitcoin Surges, but 13% of Crypto Hedge Funds Have Closed This Year

According to data from 21e6 Capital, 13% of cryptocurrency hedge funds have closed their doors so far in 2023, while the number of new launches has also declined. The trend is at odds with how such funds have historically performed: they have typically done well when the value of crypto assets rises. Bitcoin's dollar price, for instance, has increased by about 75% since the start of the year.
The Swiss firm found that directional funds have particularly underperformed when benchmarked against Bitcoin. Directional funds use tactics based on anticipated market moves: compared with their non-directional counterparts, they rely more heavily on futures markets and favor bets on short-lived price swings. Such strategies can fail when long-term trends in the crypto market break down.
The Quantitative Directional Approach Was the Least Successful
The quantitative directional method, one of the investing strategies employed by hedge funds, has proven to be the least successful in 2023. It typically relies on trading algorithms and statistical decision-making.
Such data-driven tactics caused problems for directional funds in a year marked by choppy markets, as the 21e6 Capital study notes. In other words, although cryptocurrency prices have risen, they have not done so consistently. As a result, trend-following tactics lose predictive power and trading algorithms receive erroneous signals.
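This failure mode can be illustrated with a toy example (a hypothetical sketch, not any fund's actual strategy): a naive trend signal that goes long when price sits above its trailing moving average. On a steady uptrend the signal never flips; on a choppy series that ends at the same level, it flips almost every bar, and each flip is a potential whipsaw trade.

```python
def sma(prices, window):
    """Trailing simple moving average; one value per bar once the window fills."""
    return [sum(prices[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(prices))]

def trend_signal(prices, window=4):
    """+1 (long) when price sits above its trailing SMA, else -1."""
    avgs = sma(prices, window)
    return [1 if prices[i + window - 1] > avgs[i] else -1
            for i in range(len(avgs))]

def flips(signals):
    """Count direction changes; each one is a round-trip (potential whipsaw) trade."""
    return sum(1 for a, b in zip(signals, signals[1:]) if a != b)

# Both invented series end ~75% above their start, roughly like Bitcoin in 2023,
# but one rises steadily while the other whipsaws on the way up.
smooth = [100 + 5 * i for i in range(16)]            # steady uptrend
choppy = [100, 120, 95, 130, 105, 140, 110, 150,
          115, 160, 125, 170, 130, 175, 140, 175]    # same endpoint, noisy path

print(flips(trend_signal(smooth)))  # 0  -> the trend-follower holds one position
print(flips(trend_signal(choppy)))  # 12 -> constant false signals
```

Identical start and end points, yet the choppy path generates a dozen spurious position changes: the "erroneous signals" the study describes.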
AI-Generated Content Is on the Rise
The recent rise in the volume of AI-generated content online presents another difficulty for algorithmic trading strategies. To protect the precision and efficacy of their algorithms, quant funds are changing how they collect data, at a time when AI-generated misinformation is becoming more common.
In fact, the spread of AI-generated material threatens more than just trading algorithms. Research has shown that when input data is contaminated by synthetically generated content, output quality degrades across several families of machine learning models.