While the Gambler's Fallacy wrongly infers the likelihood of future random events from past random events, we can predict non-random events with varying degrees of accuracy. Before we begin, it is important to understand conditional probability. Unlike the Gambler's Fallacy, conditional probability is concerned with events whose likelihood depends on other events.
For example, the likelihood of a coin landing on heads after nine consecutive heads is the same as if the previous nine flips had not occurred. However, the chances of being dealt a king in a poker hand go up as more non-kings are dealt, provided there are still kings in the deck.
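The deck example can be made concrete with a few lines of Python (the function name and the specific deal of ten non-kings are our own illustration, not from the text):

```python
# The probability of drawing a king rises as non-kings leave the deck,
# unlike independent coin flips, whose odds never change.

def p_king(kings_left, cards_left):
    """Probability that the next card dealt is a king."""
    return kings_left / cards_left

print(p_king(4, 52))  # fresh 52-card deck: 4/52
print(p_king(4, 42))  # after 10 non-kings have been dealt: 4/42
```

Each non-king removed from the deck shrinks the denominator while the four kings remain, so the conditional probability of the next card being a king keeps rising.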
This is where Bayes' Theorem comes in. It answers the question: what is the likelihood of this event, given that another related event occurred? It works by being continually updated as new evidence or information arrives. The formula is as follows: P(A|B) = P(B|A) P(A) / P(B)
P(A) is the prior probability: the probability of A occurring on its own. P(A|B) is the conditional probability of A given that B occurs; it assumes that A is not independent of B. Likewise, P(B|A) is the conditional probability of B given that A occurs, and P(B) is the probability of B occurring. While simple, this formula is the basis for Google's search engine and the weapon of choice against email spam.
For those unfamiliar with the equation, it can be a bit unintuitive. For example, suppose 1% of forty-year-old women who have mammograms have breast cancer, 80% of women with breast cancer get positive mammograms, and 9.6% of women without breast cancer also get a positive mammogram. If a woman gets a positive mammogram, what is the probability she actually has breast cancer?
To solve this, we plug the facts into Bayes' Theorem. First, 1% of women in the group have breast cancer, so P(A) = 0.01. Next, 80% of those women get a positive mammogram and 9.6% of women without breast cancer get a positive mammogram too, so P(B) = 0.8 P(A) + 0.096 (1 - P(A)) = 0.008 + 0.09504 = 0.10304. Finally, P(B|A) = 0.8, since 80% of women with breast cancer get positive mammograms. Using the theorem, P(A|B) = 0.8 * 0.01 / 0.10304 = 0.0776, or roughly a 7.8% chance of actually having breast cancer given a positive mammogram.
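The mammogram arithmetic can be checked directly in Python (the variable names are ours; the numbers come straight from the example):

```python
# Bayes' Theorem applied to the mammogram example.
p_cancer = 0.01               # P(A): prior probability of breast cancer
p_pos_given_cancer = 0.8      # P(B|A): positive mammogram given cancer
p_pos_given_healthy = 0.096   # positive mammogram without cancer

# Total probability of a positive mammogram, P(B)
p_pos = p_pos_given_cancer * p_cancer + p_pos_given_healthy * (1 - p_cancer)

# Bayes' Theorem: P(A|B) = P(B|A) P(A) / P(B)
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos

print(round(p_pos, 5))               # 0.10304
print(round(p_cancer_given_pos, 4))  # 0.0776
```

Note how small the posterior is compared to the 80% test accuracy: because cancer is rare in this group, most positive results come from the 9.6% false-positive rate.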
However, we want to apply this to finance, so let's look at how market sentiment affects a stock price. Say that out of 2000 samples, we find that 1000 are sad and 1000 are happy. When the sentiment is sad, we observe 800 occurrences of a price decrease and 200 of an increase. When the sentiment is happy, we observe just 50 occurrences of a price decrease and 950 of a price increase.
The variables are as follows:

P(increase) = the probability of the stock price increasing
P(decrease) = the probability of the stock price decreasing
P(happy) = the probability of the sentiment being happy
P(sad) = the probability of the sentiment being sad

To find the probability of a price increase given a happy sentiment, we calculate:

P(increase | happy) = P(increase) P(happy | increase) / P(happy)
P(increase | happy) = (1150 / 2000) (950 / 1150) / (1000 / 2000)
P(increase | happy) = (0.575)(0.826) / 0.5
P(increase | happy) = 0.95, or 95%
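The same calculation can be run from the raw counts in the example (the data set is fictitious, as noted below; the variable names are ours):

```python
# Bayes' Theorem from the sentiment counts in the example above.
samples = 2000
happy = 1000                 # happy-sentiment samples
inc_happy, dec_happy = 950, 50   # price moves under happy sentiment
inc_sad, dec_sad = 200, 800      # price moves under sad sentiment

p_happy = happy / samples                            # P(happy) = 0.5
p_increase = (inc_happy + inc_sad) / samples         # P(increase) = 1150/2000
p_happy_given_increase = inc_happy / (inc_happy + inc_sad)  # 950/1150

# P(increase | happy) = P(increase) P(happy | increase) / P(happy)
p_increase_given_happy = p_increase * p_happy_given_increase / p_happy
print(round(p_increase_given_happy, 2))  # 0.95
```

In this toy data set the answer could also be read off directly as 950/1000; running it through Bayes' Theorem instead shows the mechanics that carry over when only the marginal and conditional probabilities are available.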
While totally fictitious, this example would point to a strong likelihood of a price increase given a happy sentiment. In more realistic terms, the sentiment would be a quantified value and more involved to calculate. However, the goal here is to introduce a new concept that can later be expanded on for financial modeling.
With the example above, our values will change as we gather more data, and the results become more accurate over time. So even if we have only estimates rather than exact probabilities, we can still use Bayes' Theorem to help fill in the blanks, refining the values each time we apply the theorem to new data.
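A minimal sketch of this updating, reusing the sentiment example (the 100 extra observations and their 80/20 split are hypothetical numbers of our own, purely for illustration):

```python
# Re-estimate P(increase | happy) as new observations accumulate.

def p_increase_given_happy(inc_happy, dec_happy):
    """Estimate P(increase | happy) directly from the happy-sentiment counts."""
    return inc_happy / (inc_happy + dec_happy)

inc_happy, dec_happy = 950, 50   # counts from the original 2000 samples
print(p_increase_given_happy(inc_happy, dec_happy))  # 0.95

# Suppose 100 more happy-sentiment samples arrive: 80 increases, 20 decreases.
inc_happy += 80
dec_happy += 20
print(round(p_increase_given_happy(inc_happy, dec_happy), 4))  # 0.9364
```

Each new batch of observations shifts the estimate; with enough data, the estimated probabilities converge toward the true underlying values.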
While Bayes' Theorem is a powerful tool, it is only one part of a financial modeling solution. Before implementing it in a trading environment, one needs to take the time to learn more through careful experimentation and study.