Visualizing Trading Strategies that Consistently Outperform the Stock Market

In this short paper, I discuss two topics. First, strategies to trade the S&P 500 index with few trades over long time periods, identifying the best exit, entry, and re-entry points along the way in order to beat the baseline return, where the baseline consists of staying long the whole time. The dataset covers 40 years of daily prices. Second, and perhaps most importantly, how the underlying algorithms lead to original optimization techniques applicable to most AI problems, including those solved with deep neural networks.

Original trading strategies

It is difficult to consistently beat the stock market because of the large number of participants competing against you. Staying long on the S&P 500 index is one of the most effective strategies, outperforming many if not most professional traders in the long run. To do better while minimizing competition, you need strategies that Wall Street professionals must avoid. For instance, keeping cash for extended periods of time (sometimes for years in a row) so that you can jump in at the right time with no advance notice, right after a massive crash. You then buy and sell during a short window following the crash to leverage the resulting volatility, ending up with a stable long position acquired at a steeply discounted price. You sell that position months or years later, once its value has massively increased following the slow or fast rebound. Then you repeat the cycle.
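To make the mechanics concrete, here is a minimal backtest sketch of a crash-triggered entry and exit rule, assuming daily closing prices in a pandas Series. The crash_reentry_strategy function, the drawdown trigger, and the profit target are illustrative placeholders, not the exact rules or parameters used in the paper.

```python
# Minimal sketch of a crash-triggered entry/exit strategy (illustrative only).
import numpy as np
import pandas as pd

def crash_reentry_strategy(prices: pd.Series,
                           crash_drawdown: float = 0.25,
                           profit_target: float = 0.60) -> pd.Series:
    """Return a 0/1 position series: hold cash until the index falls
    crash_drawdown below its running peak, buy, then sell once the
    position gains profit_target; repeat the cycle."""
    position = pd.Series(0, index=prices.index, dtype=int)
    peak = prices.iloc[0]
    entry_price = None
    for t, p in prices.items():
        peak = max(peak, p)
        if entry_price is None:
            if p <= peak * (1 - crash_drawdown):        # crash detected: buy
                entry_price = p
        else:
            if p >= entry_price * (1 + profit_target):  # target hit: sell
                entry_price = None
                peak = p                                # reset the running peak
        position.loc[t] = 0 if entry_price is None else 1
    return position

# Usage on synthetic data (replace with the spx500 dataset from GitHub):
rng = np.random.default_rng(0)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, 2500))))
pos = crash_reentry_strategy(prices)
strat_growth = (1 + prices.pct_change().fillna(0) * pos.shift(1).fillna(0)).prod()
baseline_growth = prices.iloc[-1] / prices.iloc[0]
print(f"strategy: {strat_growth:.2f}x   baseline (buy & hold): {baseline_growth:.2f}x")
```

The point of such a sketch is that the whole strategy reduces to a handful of parameters (here, the drawdown trigger and the profit target), which is exactly what makes the optimization discussion in the next section possible.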

How to better optimize any AI algorithm

While the topic is about trading strategies that work, the most interesting part concerns AI optimization techniques. Each dot in the figure represents the average performance of a trading strategy over different time periods, ranging from great (blue, green) to no better than baseline (grey) to poor (orange and red). Each strategy is defined by three parameters: two of them have their values displayed on the X and Y axes, while the third one defines the slice: the left and right scatterplots correspond to two different values of the third parameter.
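The figure itself is not reproduced here, but the sketch below shows how this kind of visualization can be generated: performance over a grid of two parameters, colored from blue (good) to red (poor), with one scatterplot per value of the third parameter. The evaluate function and the parameter names are placeholders, not the actual strategy parameters from the paper.

```python
# Sketch of the two-slice scatterplot described above (illustrative surface).
import numpy as np
import matplotlib.pyplot as plt

def evaluate(a, b, c):
    # Placeholder for "average performance vs. baseline" across time periods.
    return np.sin(3 * a) * np.cos(2 * b) - 0.3 * c

a_vals = np.linspace(0, 1, 25)
b_vals = np.linspace(0, 1, 25)
c_vals = [0.0, 0.5]   # the "third parameter": one slice per scatterplot

fig, axes = plt.subplots(1, len(c_vals), figsize=(10, 4), sharey=True)
for ax, c in zip(axes, c_vals):
    A, B = np.meshgrid(a_vals, b_vals)
    perf = evaluate(A, B, c)
    sc = ax.scatter(A.ravel(), B.ravel(), c=perf.ravel(),
                    cmap="RdYlBu", s=15)   # red = poor, blue = good
    ax.set_title(f"third parameter = {c}")
    ax.set_xlabel("parameter 1")
axes[0].set_ylabel("parameter 2")
fig.colorbar(sc, ax=axes, label="performance vs. baseline")
plt.show()
```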

There are obvious areas of good and bad performance in the parameter space. The idea is to find stable, good parameter sets, not too close to a red dot, in order to avoid overfitting. This is similar to identifying the best hyperparameters in a deep neural network, using techniques such as smart grid search or boundary detection. There is no reason to look for a global optimum: it would be time-consuming, and the gain would be minimal. The same philosophy applies to gradient descent in neural networks, where over-optimizing leads to "getting stuck" (vanishing gradient). Note that in this case, the implicit loss function is somewhat chaotic (in particular, nowhere continuous), though gentle enough to lead to valuable results. You may use my math-free gradient descent algorithm featured here, suitable for pure data (no loss function), to solve this problem.
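The sketch below illustrates the robustness criterion under simple assumptions, using a plain neighborhood-based grid search (not the math-free gradient descent algorithm mentioned above): rather than selecting the single best grid point, it keeps only parameter sets whose entire neighborhood stays above a "poor performance" threshold, so the chosen strategy never sits next to a red dot. The performance surface, thresholds, and neighborhood radius are all hypothetical.

```python
# Sketch: keep parameter sets that are good AND have no poor neighbor.
import numpy as np

def evaluate(a, b):
    # Placeholder performance surface (excess return over baseline).
    return np.sin(3 * a) * np.cos(2 * b)

a_vals = np.linspace(0, 1, 25)
b_vals = np.linspace(0, 1, 25)
A, B = np.meshgrid(a_vals, b_vals)
perf = evaluate(A, B)

good, bad = 0.5, -0.2   # thresholds for "good" and "poor" performance
radius = 2              # neighborhood half-width, in grid cells

stable = []
n_rows, n_cols = perf.shape
for i in range(n_rows):
    for j in range(n_cols):
        if perf[i, j] < good:
            continue
        window = perf[max(0, i - radius): i + radius + 1,
                      max(0, j - radius): j + radius + 1]
        if window.min() > bad:   # no poor performer nearby
            stable.append((A[i, j], B[i, j], perf[i, j]))

print(f"{len(stable)} stable parameter sets out of {perf.size}")
```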

Get the paper, dataset, and Python code

The technical document is available on GitHub: see paper 47, here. It features detailed documentation with illustrations, 40 years of daily historical data for the S&P 500 index, and the Python code, with one-click links to GitHub. For direct access, follow this link and look for documents starting with spx500 in the name. The material is also included in my book “Building Disruptive AI & LLM Technology from Scratch”, available here.

To avoid missing future versions with more features, subscribe to my newsletter, here.

About the Author


Vincent Granville is a pioneering GenAI scientist and machine learning expert, co-founder of Data Science Central (acquired by a publicly traded company in 2020), Chief AI Scientist at MLTechniques.com and GenAItechLab.com, former VC-funded executive, author (Elsevier), and patent owner, including one patent related to LLMs. Vincent’s past corporate experience includes Visa, Wells Fargo, eBay, NBC, Microsoft, and CNET. Follow Vincent on LinkedIn.

 
