How We Forecast

The Math and the Theory Behind the CycleIntelligence Forecasts
It has long been recognized that various aspects of the stock market and global economies cycle from strength to weakness and back over time. Some segments of the market are even categorized as "cyclical." Seasonal cyclicality is also observable, with some sectors (especially agriculture-related ones) exhibiting more seasonality than others.
What is not widely known is that analyzing cycles of time in conjunction with pricing trends, tops and bottoms in individual equities, indexes and other financial instruments can often produce a highly correlated forecast and increase the probability of predicting future trends, tops and bottoms.
The exact process used by CycleIntelligence is proprietary and highly protected intellectual property. However, it is important to understand the depth and significance of the math and algorithmic theories that are used in the CycleIntelligence forecasting application.
The Theory
CycleIntelligence forecasts are produced from mathematical models predicated on two major assumptions:
1. Historical pricing data can be explicitly replicated through a combination of sinusoidal waves (Fourier, 1807). In the picture to the right, this concept is illustrated by showing that the combination of four sinusoidal waves exactly reproduces the historical pricing chart at the bottom of the figure. In other words, a mathematically derived series of sine and cosine waves of varying wavelengths and amplitudes can be combined so that the resultant, or composite, wave matches historical pricing data to the penny, replicating historical trends, tops and bottoms. With each forecast, the CycleIntelligence forecasting model performs this alignment process, constructing a composite wave from literally hundreds of thousands of sinusoidal waves.
2. Unfortunately, even when the historical data can be replicated by a complex combination of sinusoidal waves, the resultant composite wave does not always produce a reliable forecast of future data. However, buried within the sinusoidal waves are certain waves that do tend to be highly predictive, and separating predictive from non-predictive waves is the key to successful forecasting. Among the many thousands of sinusoidal waves used to define the historical pricing trends, some merely represent noise; some represent exogenous events; some represent random periodicity in the data; and some represent truly predictive cycles. The key, therefore, is to eliminate all waves that are not predictive and to develop forecasts only from those waves deemed mathematically predictive.
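The first assumption, that history can be replicated exactly by a sum of sinusoids, can be demonstrated in a few lines with a discrete Fourier transform. The sketch below uses NumPy on synthetic prices; it illustrates the general mathematical principle only, not the CycleIntelligence alignment process:

```python
import numpy as np

# Hypothetical daily closing prices (any real price series works the same way).
rng = np.random.default_rng(42)
prices = 100 + np.cumsum(rng.normal(0, 1, 256))

# Decompose the series into sinusoidal components with the FFT.
coeffs = np.fft.rfft(prices)

# Recombining *all* components reproduces the history exactly
# (to floating-point precision -- "to the penny" in the text's terms).
reconstructed = np.fft.irfft(coeffs, n=len(prices))

assert np.allclose(prices, reconstructed)
```

Because the full set of components reproduces the history perfectly, the forecasting problem reduces to deciding which of those components carry predictive information, which is the subject of the second assumption.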
The CycleIntelligence algorithms analyze the historical data to find only those cycles that tend to be highly predictive over reasonable periods of future time. These are called "Super Cycles". When the CycleIntelligence programs analyze a financial instrument (e.g., equity, commodity, currency, etc.), sufficient historical data is needed to find these predictive Super Cycles. Given a sufficient amount of historical data, the Super Cycles are found and then combined into a composite forecast 'wave' that is used to predict future trends, turns (tops/bottoms) and price.
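The actual Super Cycle selection is proprietary, but the general idea of keeping only a few strong cycles and extending their composite wave forward can be sketched as follows. The amplitude-based filter, the parameters, and the synthetic data here are all illustrative assumptions, not the CycleIntelligence algorithm:

```python
import numpy as np

def composite_forecast(prices, n_cycles=5, horizon=90):
    """Naive illustration: keep the n_cycles largest-amplitude sinusoids
    and extend their composite wave `horizon` steps into the future.
    (Not the proprietary Super Cycle selection -- just a sketch.)"""
    n = len(prices)
    detrended = prices - prices.mean()
    coeffs = np.fft.rfft(detrended)
    freqs = np.fft.rfftfreq(n)

    # Keep only the strongest cycles; discard everything else as "noise".
    keep = np.argsort(np.abs(coeffs))[-n_cycles:]
    t = np.arange(n + horizon)
    forecast = np.full(t.shape, prices.mean())
    for k in keep:
        # Recover amplitude/phase of each retained sinusoid from its coefficient.
        amp = np.abs(coeffs[k]) / n * (1 if k in (0, n // 2) else 2)
        phase = np.angle(coeffs[k])
        forecast += amp * np.cos(2 * np.pi * freqs[k] * t + phase)
    return forecast  # first n points fit history; last `horizon` extend it

# Hypothetical series with one dominant 50-day cycle plus noise.
rng = np.random.default_rng(0)
t = np.arange(500)
prices = 100 + 5 * np.sin(2 * np.pi * t / 50) + rng.normal(0, 0.3, 500)
curve = composite_forecast(prices, n_cycles=3)
```

On data with a genuinely periodic component, the retained sinusoids track the history closely and the extension carries the cycle forward; on data without stable cycles, such a naive filter would simply project noise, which is why the selection of truly predictive waves matters.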
As with all predictive models, future one-off (exogenous) events cannot always be predicted. The longer the forecast, in terms of time, the more likely exogenous events will occur and, as such, skew the pricing trend higher or lower, depending upon the kind, severity and frequency of those events.
The Data
The two data inputs used in the development of a forecast study are time and price.
A forecast study involves a complex analysis of the changes in price over time and the periodicity, or degree of repetitiveness, in the data. Although the inputs are simple to define, how these two elements (time and price) are analyzed determines how accurate a forecast study can be. Although time is infinite, pricing data are not, so the algorithms must have a minimum amount of pricing data to produce a reasonable forecast. In general, the more pricing data, the better. The recommended minimum is six years of historical daily closing prices. Studies can be run on smaller data sets, but forecast accuracy can be impaired with less than the recommended amount of historical data.
The Methodology
Forecasts can be produced with as little as two years of historical daily closing prices, but the more reliable forecasts tend to come from at least six years of data.
Forecasts begin with the most recent closing price and project future trends, turns and pricing levels over the next 90 calendar days. The algorithm can produce forecasts arbitrarily far into the future, but as stated above, the further out the forecast, the more exogenous events will occur, degrading the forecast.
It is recommended that forecasts be regenerated once per calendar week.
The Cone of Accuracy
One of the many components of the CycleIntelligence algorithms is a calculation of the impact of the equity's volatility over time. This calculation determines one standard deviation of historical volatility throughout the forecast, which is called the equity's "Expected Move", or "EM". Each forecast includes a band of +0.33 EM to −0.33 EM around the composite forecast. This band widens slightly over time and is called the forecast's "Cone of Accuracy". Performance results based on the Cone of Accuracy are provided when a historical out-of-sample forecast is performed.
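As an illustration of how such a band could be constructed (the exact EM calculation is proprietary, so every choice below is an assumption), the sketch takes one standard deviation of historical daily returns, scales it by the square root of elapsed forecast time so the cone widens, and applies it at ±0.33 EM:

```python
import numpy as np

def cone_of_accuracy(prices, forecast, band=0.33):
    """Illustrative volatility band, not the proprietary EM calculation:
    one standard deviation of daily log returns, scaled by sqrt(time) so
    the band widens over the horizon, applied at +/- band * EM."""
    daily_vol = np.std(np.diff(np.log(prices)))        # 1-sigma daily move
    steps = np.arange(1, len(forecast) + 1)            # days into the forecast
    em = forecast * daily_vol * np.sqrt(steps)         # Expected Move per step
    return forecast - band * em, forecast + band * em  # lower/upper cone edges

# Hypothetical inputs: a random-walk price history and a flat 90-day forecast.
rng = np.random.default_rng(7)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 1500)))
forecast = np.full(90, prices[-1])
lower, upper = cone_of_accuracy(prices, forecast)
```

The square-root scaling is the standard random-walk assumption for how uncertainty grows with time; it produces the gradual widening the text describes, with the band narrowest at the most recent close and widest at day 90.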
