Merge pull request CamDavidsonPilon#337 from lnauta/master
small fixes: markdown, pep8 and poisson distribution
CamDavidsonPilon authored Jan 25, 2017
2 parents 0f48e21 + 20a5fcd commit 617a181
Showing 2 changed files with 9 additions and 9 deletions.
2 changes: 1 addition & 1 deletion Chapter1_Introduction/Ch1_Introduction_PyMC2.ipynb
@@ -404,7 +404,7 @@
"### Discrete Case\n",
"If $Z$ is discrete, then its distribution is called a *probability mass function*, which measures the probability $Z$ takes on the value $k$, denoted $P(Z=k)$. Note that the probability mass function completely describes the random variable $Z$, that is, if we know the mass function, we know how $Z$ should behave. There are popular probability mass functions that consistently appear: we will introduce them as needed, but let's introduce the first very useful probability mass function. We say $Z$ is *Poisson*-distributed if:\n",
"\n",
-"$$P(Z = k) =\\frac{ \\lambda^k e^{-\\lambda} }{k!}, \\; \\; k=0,1,2, \\dots $$\n",
+"$$P(Z = k) =\\frac{ \\lambda^k e^{-\\lambda} }{k!}, \\; \\; k=0,1,2, \\dots, \\; \\; \\lambda \\in \\mathbb{R}_{>0} $$\n",
"\n",
"$\\lambda$ is called a parameter of the distribution, and it controls the distribution's shape. For the Poisson distribution, $\\lambda$ can be any positive number. By increasing $\\lambda$, we add more probability to larger values, and conversely by decreasing $\\lambda$ we add more probability to smaller values. One can describe $\\lambda$ as the *intensity* of the Poisson distribution. \n",
"\n",
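The effect of $\lambda$ described above is easy to check numerically. A minimal sketch, implementing the mass function directly (the two $\lambda$ values are arbitrary illustrations, not from the notebook):

```python
import math

def poisson_pmf(k, lam):
    # P(Z = k) = lam**k * exp(-lam) / k!,  for k = 0, 1, 2, ...
    return lam**k * math.exp(-lam) / math.factorial(k)

# Increasing lambda shifts probability mass toward larger values of k.
small = [poisson_pmf(k, 1.5) for k in range(30)]
large = [poisson_pmf(k, 8.0) for k in range(30)]
print(max(range(30), key=lambda k: small[k]))  # mode sits near lambda = 1.5
print(max(range(30), key=lambda k: large[k]))  # mode sits near lambda = 8.0
```

The probabilities sum to 1 over all $k$, and the mode tracks $\lambda$, matching the "intensity" interpretation in the text.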
16 changes: 8 additions & 8 deletions Chapter6_Priorities/Ch6_Priors_PyMC2.ipynb
@@ -325,7 +325,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"#####Example: Bayesian Multi-Armed Bandits\n",
+"##### Example: Bayesian Multi-Armed Bandits\n",
"*Adapted from an example by Ted Dunning of MapR Technologies*\n",
"\n",
"> Suppose you are faced with $N$ slot machines (colourfully called multi-armed bandits). Each bandit has an unknown probability of distributing a prize (assume for now the prizes are the same for each bandit, only the probabilities differ). Some bandits are very generous, others not so much. Of course, you don't know what these probabilities are. By choosing only one bandit per round, our task is to devise a strategy to maximize our winnings.\n",
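The strategy this chapter develops (the Bayesian bandit) keeps a Beta posterior over each bandit's win probability and, each round, plays the bandit with the largest posterior sample. A minimal Thompson-sampling sketch under made-up hidden probabilities (the probabilities, the seed, and the 1000-round horizon are all illustrative, not from the notebook):

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_prob = np.array([0.85, 0.60, 0.75])  # unknown to the player; made up
wins = np.zeros(3)
trials = np.zeros(3)

for _ in range(1000):
    # Draw one sample from each bandit's Beta(1 + wins, 1 + losses) posterior
    draws = rng.beta(1 + wins, 1 + trials - wins)
    choice = int(np.argmax(draws))          # play the bandit with the best draw
    result = rng.random() < hidden_prob[choice]
    trials[choice] += 1
    wins[choice] += result

print(trials.astype(int))  # pull counts; generous bandits tend to dominate
```

Uncertain bandits still get occasional pulls (exploration), but the posterior concentrates and play shifts toward the truly generous bandits (exploitation).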
@@ -706,7 +706,8 @@
"outputs": [],
"source": [
"figsize(12.5, 5)\n",
-"from other_strats import *\n",
+"from other_strats import GeneralBanditStrat, bayesian_bandit_choice, max_mean, lower_credible_choice, \\\n",
+"    upper_credible_choice, random_choice, ucb_bayes, Bandits\n",
"\n",
"# define a harder problem\n",
"hidden_prob = np.array([0.15, 0.2, 0.1, 0.05])\n",
@@ -996,7 +997,7 @@
"\n",
"Historically, the expected return has been estimated by using the sample mean. This is a bad idea. As mentioned, the sample mean of a small dataset has enormous potential to be very wrong (again, see Chapter 4 for full details). Thus Bayesian inference is the correct procedure here, since we are able to see our uncertainty along with probable values.\n",
"\n",
-"For this exercise, we will be examining the daily returns of AAPL, GOOG, MSFT and AMZN. Before we pull in the data, suppose we ask a stock fund manager (an expert in finance, but see [10] ), \n",
+"For this exercise, we will be examining the daily returns of AAPL, GOOG, TSLA and AMZN. Before we pull in the data, suppose we ask a stock fund manager (an expert in finance, but see [10] ), \n",
"\n",
"> What do you think the return profile looks like for each of these companies?\n",
"\n",
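The shrinkage argument above can be sketched with a conjugate toy model. This is a simplified, univariate stand-in for the chapter's approach: made-up daily returns, a Normal likelihood with known variance, and a skeptical Normal prior centered at zero (every number here is illustrative, not fitted):

```python
import numpy as np

rng = np.random.default_rng(42)
returns = rng.normal(0.0005, 0.01, size=20)  # made-up daily returns, n = 20
sample_mean = returns.mean()

# Conjugate Normal-Normal update: prior N(mu0, tau0^2) on the expected
# return; the likelihood variance sigma^2 is treated as known.
mu0, tau0, sigma = 0.0, 0.001, 0.01
n = len(returns)
post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
post_mean = post_var * (mu0 / tau0**2 + n * sample_mean / sigma**2)

# The posterior shrinks the noisy sample mean toward the skeptical prior,
# and its variance quantifies how unsure we remain after only 20 days.
print(sample_mean, post_mean)
```

With only 20 observations the prior dominates; with thousands of observations the posterior mean would approach the sample mean, which is the point the text makes against trusting small-sample means outright.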
@@ -1524,7 +1525,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"##Effect of the prior as $N$ increases\n",
+"## Effect of the prior as $N$ increases\n",
"\n",
"In the first chapter, I proposed that as the amount of observations, or data, that we possess increases, the less the prior matters. This is intuitive. After all, our prior is based on previous information, and eventually enough new information will overshadow our previous information's value. The smothering of the prior by enough data is also helpful: if our prior is significantly wrong, then the self-correcting nature of the data will present to us a *less wrong*, and eventually *correct*, posterior. \n",
"\n",
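The claim that enough data smothers the prior has a clean closed form in the Beta-Binomial case. A sketch with two deliberately opposed priors (the prior strengths and the 70% heads fraction are made up for illustration):

```python
def posterior_mean(a, b, heads, n):
    # A Beta(a, b) prior plus n coin flips with `heads` successes
    # gives a Beta(a + heads, b + n - heads) posterior.
    return (a + heads) / (a + b + n)

for n in (10, 100, 10000):
    heads = int(0.7 * n)  # made-up data: 70% heads at every sample size
    optimist = posterior_mean(10, 1, heads, n)  # prior biased toward heads
    skeptic = posterior_mean(1, 10, heads, n)   # prior biased toward tails
    print(n, round(abs(optimist - skeptic), 4))  # gap shrinks as 9 / (11 + n)
```

The disagreement between the two analysts decays like $1/N$: their priors differ by a fixed amount, but the shared data term grows without bound.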
@@ -1828,8 +1829,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"\n",
-"#Bayesian Rugby#\n",
+"# Bayesian Rugby\n",
"Note: This submission comes from Peadar Coyle and is our first 'guest' example. \n",
"Peadar is known as @springcoil on Twitter and is an Irish data scientist with a mathematical focus; he is currently based in Luxembourg. \n",
"I came across the following blog post on http://danielweitzenfeld.github.io/passtheroc/blog/2014/10/28/bayes-premier-league/ \n",
@@ -1842,7 +1842,7 @@
"\n",
"Since I am a rugby fan I decide to apply the results of the paper [Bayesian Football](http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=3&cad=rja&uact=8&ved=0CC8QFjAC&url=http%3A%2F%2Fwww.statistica.it%2Fgianluca%2FResearch%2FBaioBlangiardo.pdf&ei=0m3aVKK2KMm6UarSgYgM&usg=AFQjCNGiEg26H58zDiEIx3C7diUzfq3bJQ&sig2=yICsOBSJBniJNzlLW-H86g&bvm=bv.85464276,d.d24) to the Six Nations.\n",
"\n",
-"##Acquiring the data##\n",
+"## Acquiring the data\n",
"The first step was to acquire the data, which I created in a CSV file from data I got on Wikipedia and sports websites. To be honest, a lot of this turned out to be manual entry, but this is fine for T = 6 teams :) \n",
"\n",
"We largely follow the code of the website cited above, with only a few small changes. We do less wrangling because I personally curated the data. \n",
@@ -2094,7 +2094,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"#The model. #\n",
+"# The model\n",
"The league is made up of a total of T = 6 teams, playing each other once in a season. We indicate the number of points scored by the home and the away team in the g-th game of the season (15 games) as $y_{g1}$ and $y_{g2}$ respectively. \n",
"\n",
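In the Baio and Blangiardo formulation this example follows, the scoring rates are log-linear in per-team attack and defence strengths plus a home advantage. A minimal forward simulation of that structure, to show how $y_{g1}$ and $y_{g2}$ arise (the intercept, strengths, and seed are made up, not fitted values):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 6                              # teams, as in the Six Nations
mu, home = 3.0, 0.3                # made-up intercept and home advantage (log scale)
att = rng.normal(0, 0.2, size=T)   # made-up attack strengths
dfn = rng.normal(0, 0.2, size=T)   # made-up defence strengths

games = [(h, a) for h in range(T) for a in range(T) if h < a]  # 15 fixtures
scores = []
for h, a in games:
    # log(theta_g1) = mu + home + att[home team] + dfn[away team]
    theta1 = np.exp(mu + home + att[h] + dfn[a])
    theta2 = np.exp(mu + att[a] + dfn[h])
    # y_g1 ~ Poisson(theta1), y_g2 ~ Poisson(theta2)
    scores.append((rng.poisson(theta1), rng.poisson(theta2)))
print(len(games), scores[0])
```

Inference then runs this generative story in reverse: given the observed scores, the model recovers posteriors over the attack and defence strengths.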
