(aka. “America’s Next Top Model”)
If you have been watching the U.S. Coronavirus Task Force briefings, you know that Dr. Birx (“Debbie” LOL) is emerging as a key player. In case you missed yesterday’s, watch this for a minute or two:
Apparently, our experts were unhappy with the model from Imperial College, so they started over and created their own. Then they found a group at the University of Washington’s IHME producing identical results.
Here is that model. Select your own state from the drop-down menu. They intend to keep it up-to-date as “facts on the ground” emerge.
The Frequently Asked Questions are worth a read. For example:
Does this show the effect of social distancing and other measures?
The model includes social distancing, and assumes continued social distancing until the end of May 2020. In states that do not currently have social distancing in place, we have assumed that they will put it in place within seven days. If they do not, then the estimates for the number of deaths and the burden on their hospital systems will reflect this and will go up.
I am not an expert, but it seems to me that widespread testing to enable contact tracing could affect these curves (cf. South Korea). As could a readily available treatment if it can keep people out of the hospital. But from where we are right now, today, these are probably the best projections you will find. And right or wrong, they are guiding policy decisions at the federal level.
First, the bad news. Here is an epidemiological paper from Imperial College London that has been making the rounds. They base their conclusions on their models, which they claim are robust against modest changes in their assumptions. See also this profile of the principal author.
They say if we take sufficiently draconian measures to get the reproductive rate below 1, we have to do so nation-wide and for at least two months. And then after we relax those measures, we will have a nearly identical epidemic a few months later, because our population will still have almost no immunity. (They recommend paying close attention to what happens next in China and S. Korea.)
If we just “flatten the curve” with moderate measures, there will still be enough cases to overwhelm our health care system and result in hundreds of thousands of deaths.
The middle option is to institute draconian measures whenever a certain number of ICU beds fill up, and then relax them when those beds become free again. This would result in a series of manageable peaks, with draconian measures in place around half the time.
This is pretty much where we are until the summer or fall of 2021.
That is my interpretation, anyway. Read it for yourself.
The reason for this dire conclusion is the lack of any effective treatment or vaccine. New pharmaceuticals take time, first because you have to make sure they are safe and second because you have to ramp up mass production. And that’s without even considering long-term side effects; e.g. what if remdesivir causes cancer?
Enter the good news.
HCQ is hydroxychloroquine, aka. Plaquenil. It is an anti-malarial that has been around for more than 50 years, and early results both in the lab and in people suggest it is effective against COVID-19. If that is true, it is impossible to overstate the importance.
Make no mistake; this is a serious drug with unpleasant side effects. But those side effects are largely temporary. Most importantly, we know what they are (even long term), we know their incidence, we know how to mitigate them, and we know all of the risk factors.
We will avoid disaster if we can prevent the ICUs from filling up. If there really is an existing, widely available drug that is even 70% effective, this whole thing is going to be over in a few weeks.
The derivative of an exponential is an exponential. So when the graph of total cases has the same shape as the graph of its day-over-day change (first derivative), and the same shape as the increase in day-over-day change (second derivative), and so on… We are in the exponential phase.
Click here for worldwide numbers. Click on a country for its details. The most interesting chart is “Daily New Cases”. When that levels off, the second derivative is zero and you are looking at an inflection point. But do remember that the data are noisy, and that testing capacity may be increasing or saturated. Still, this is the most informative chart.
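If you like to poke at numbers yourself, the arithmetic behind those charts is just repeated differencing. Here is a toy sketch (the cumulative totals are made up, not real data):

```python
import numpy as np

# Made-up cumulative case totals, purely to illustrate the differencing.
total_cases = np.array([100, 140, 196, 274, 384, 520, 660, 780, 870, 930])

daily_new = np.diff(total_cases)      # first derivative: the "Daily New Cases" chart
change_in_new = np.diff(daily_new)    # second derivative

print("daily new cases:          ", daily_new)
print("change in daily new cases:", change_in_new)
# While both rows keep rising, growth is still roughly exponential.
# The inflection point is where the second row crosses zero.
```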
Speaking of testing capacity increasing, click here for U.S. numbers. The reporting in the U.S. is almost entirely state-specific, so these folks are providing a useful service simply by adding it all up.
The U.S. tested over 22,000 people yesterday.
I have heard and read some people saying that testing for the coronavirus is not very important. They usually seem to have a physician friend who told them something like this: “I do not need to test anyone because the results are not actionable. We have no pharmaceuticals, and we are not going to intubate someone before they even have symptoms, so the test result would change nobody’s behavior. All it would do is provide some peace of mind, maybe.”
For most diseases this would be correct, of course. I want to explain why, for a pandemic, it could not be more wrong. Executive summary: Your test is not for you; it’s for everyone else.
South Korea appears to be getting their epidemic under control. They are doing it without welding apartments shut or arresting people for walking around without a mask.
What they have done, as of this writing, is to test over 250,000 people, and they are adding another 20,000 per day. For comparison, the U.S. has currently tested fewer than 5,000 total. Yes, this does imply the case numbers in the U.S. are utter fiction.
When someone tests positive, they usually self-isolate. Almost nobody wants to get others sick (notwithstanding the occasional apocalyptic death cult). Then the authorities track down their close contacts and encourage them to get tested, too.
Recall that this thing can spread without symptoms. So tell me, if you learned that a close colleague had tested positive, would you want to get tested? Would you change your behavior toward a live-in elderly relative?
South Korea is also publishing aggregate results. If there is an outbreak in a region, you do not need to cancel the concerts and soccer games there; people will do that for themselves. Almost nobody wants to get sick.
If you search the news for “South Korea” and “coronavirus”, you will find many articles about this.
Now let’s talk about the U.S. The public venue closures are too many to count now. But look up for a minute and think about the future. How does a county in rural Kansas decide when to close its schools? How does San Francisco decide when to reopen theirs? Most of these closures are nominally until March 31 or April 15. What then? It’s not going away any time soon.
Effective interventions are targeted. Targeted interventions are based on data. We are late starting this race, and at the moment, we are still running it blind. We need widespread testing to inform both personal and policy response. The test does not even need to be very accurate to let us gauge trends and focus efforts, assuming it is performed widely enough.
Exponential growth swings both ways. Small interventions early have outsized effects later. At this stage, testing is not just an important thing; it is the most important thing.
Perhaps the most famous proponent of the Kelly Criterion is Edward Thorp. He founded the M.I.T. Blackjack Club, published various papers on gambling and investing, and became both a professor of mathematics and a billionaire investor. The Kelly Criterion played a key role in most of these; he dubbed it “Fortune’s Formula”.
Thorp has authored various articles about the Kelly Criterion over the years, e.g. The Kelly Criterion and the Stock Market. These typically list six properties related to the Kelly formula which I will now attempt to paraphrase:
- If your expected compound rate of growth is positive, your wealth will approach \(\infty\) over time
- If your expected compound rate of growth is negative, your wealth will approach zero over time
- If your expected compound rate of growth is zero, your wealth will approach both \(\infty\) and zero (i.e. make arbitrarily wide swings) over time
- The ratio between the performance of the Kelly strategy and that of any other strategy will approach \(\infty\) over time
- The expected time to reach any fixed target wealth is shorter for Kelly than for any other strategy
- To maximize your expected rate of growth over many rounds, you can simply maximize the expected logarithm of your wealth each round, even if the exact probabilities and payoffs change from round to round
At least a couple of these results were first established by Thorp himself in the 60s.
To reiterate the context: We assume you have some “edge” in gambling or investing, and you are going to make a large sequence of bets/investments using that edge, compounding your results over time. These properties — and the Kelly formula itself — are about your strategy for sizing each bet. (If you have no edge, you should not be making bets in the first place.)
Properties (1), (2), and (3) say you do not have to use the Kelly formula to do (very) well; smaller or even somewhat larger bets will work fine. But be careful not to make your bets too large or you are very surely going to do (very) poorly.
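To make (1) through (3) concrete, here is a rough Monte Carlo sketch using the \(p = \frac{2}{3}\), 1:1-payoff dice game from my toy example. The bet fractions, round count, and trial count are arbitrary choices of mine:

```python
import math
import random

# Rough Monte Carlo sketch of Kelly properties (1)-(3) for the p = 2/3,
# 1:1-payoff game. All of the constants below are arbitrary.
P_WIN, ROUNDS, TRIALS = 2 / 3, 2_000, 500

def log10_final_wealth(fraction):
    """Log10 of final wealth after ROUNDS bets of `fraction` of the bankroll."""
    log_wealth = 0.0  # start with $1; track log10 to avoid overflow
    for _ in range(ROUNDS):
        won = random.random() < P_WIN
        log_wealth += math.log10(1 + fraction if won else 1 - fraction)
    return log_wealth

for fraction in (0.10, 1 / 3, 0.60, 0.90):  # under-bet, Kelly, over-bet, way over-bet
    results = sorted(log10_final_wealth(fraction) for _ in range(TRIALS))
    median = results[TRIALS // 2]
    print(f"f = {fraction:.2f}: median log10(wealth) after {ROUNDS} rounds = {median:8.1f}")
```

With these numbers, the Kelly fraction of \(\frac{1}{3}\) shows the fastest typical growth, both smaller and larger fractions grow more slowly, and betting 90% per round grinds the bankroll toward zero.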
Property (4) is essentially the one I have mentioned already: As the number of bets goes up, Kelly is increasingly likely to outperform any other strategy, and that outperformance is likely to grow toward \(\infty\) over time.
Property (5) says Kelly bets are the fastest expected way to reach a betting/investment target.
Property (6) says it is valid to apply the Kelly Criterion to situations more complex than (e.g.) my little toy example with the dice.
Other billionaire investors known or strongly suspected of using Kelly methods include Warren Buffett, George Soros, and James Simons. That is some impressive company.
…
On the flip side, the most prominent critic of the Kelly Criterion was probably Paul Samuelson. A Nobel prizewinner in Economics, he wrote about the Kelly formula several times, the most amusing surely being his NSF-funded academic paper consisting entirely of one-syllable words. He was presumably trying to make it accessible to the less gifted; Prof. Samuelson was apparently a bit of an a**hole. My kind of guy.
Now, nobody likes to laugh at economists more than I do. And stodgy academics telling colorful billionaires how to invest certainly seems a ripe opportunity.
But this is really not fair. Those same academics would also tell a lottery winner he should never have bought a ticket, and they would be right. The details, and not the outcomes or personalities, are what matter.
We will ponder some of those details in the next installment.
I changed my mind; I want to stick with my toy example just a little bit longer.
Let’s change the game slightly. Instead of bringing your own bankroll, Casino Nemo gives you $1 with which to play. You can play as many rounds as you like, compounding your gains from round to round… For as many rounds as you can win in a row. And the first time you lose, you do not lose your wager; you get to keep it! But then the game is over and you do not get to play any more. So I guess the first time you lose is also the last time.
There is just one catch. You have to pay a one-time fee to play.
Question: How much should you be willing to pay to play this version of the game?
I will not bore you with the details, but the expectation value of this game is actually \(\infty\), assuming you go all-in on every bet (as you should). So you pay $1 million to play, lose on the fourth round, and take home $8. Nice work.
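(For anyone who does want one line of those details: if you win \(k\) rounds and then lose, you walk away with \(2^k\), so
\[
E[\textrm{payout}] \;=\; \sum_{k=0}^{\infty}\left(\tfrac{2}{3}\right)^{k}\cdot\tfrac{1}{3}\cdot 2^{k} \;=\; \tfrac{1}{3}\sum_{k=0}^{\infty}\left(\tfrac{4}{3}\right)^{k} \;=\; \infty
\]
and the fee you “should” be willing to pay is unbounded.)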
This little thought experiment is called the St. Petersburg paradox. Every article about the Kelly Criterion seems to mention it, although they really have very little to do with each other, in my opinion. But who am I to argue with tradition?
The first satisfactory solution was provided by Daniel Bernoulli in 1738, who made the fascinating observation that $100 to a broke man is worth more than $100 to a millionaire. In economist-speak, the utility of money is not linear. Using simple expectation value as your goal assumes that utility is linear, which gives rise to the paradox.
For expectation value to make sense as a goal, it has to be computed over a measure of value that really is linear in how much you care; such a measure is called a utility function. Bernoulli decided that a logarithmic utility function was logical; i.e. that the value of a sum of money depends only on the percentage of your net worth it represents. So $100 to someone with net worth $1000 has exactly the same utility as $100,000 to someone with net worth $1 million. Equivalently, each digit you can tack on to your net worth has the same utility.
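In symbols, those two windfalls have identical log-utility because only the ratio matters:
\[
\log\frac{$1100}{$1000} \;=\; \log 1.1 \;=\; \log\frac{$1,100,000}{$1,000,000}
\]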
Note that defining “utility” like this is an assertion without proof. And not even really, you know, true. We will revisit this when we talk about Kelly skeptics.
Using such a logarithmic utility function, the St. Petersburg paradox vanishes because the expectation value is no longer infinite. Read the Wikipedia article for the gory details.
Returning now to a world where you place bets you might actually lose, what is the connection between all this and the Kelly Criterion?
In Kelly’s original paper, the goal he chose was to maximize the expected rate of return. That is, given some betting strategy that you apply for \(n\) rounds, what was your average percentage return per round? The strategy that maximizes the expected value of that per-round compound return, as \(n\) becomes large, is the Kelly strategy. Note that this is not only a property of the Kelly strategy; it is the original definition.
It turns out — since percentage return is basically a logarithm and compounding (multiplying) results is just adding logarithms — that this is equivalent to maximizing your expected utility on each round using a logarithmic utility function. In fact, the Wikipedia page for the Kelly Criterion “derives” the Kelly formula from this fact, without really explaining where it comes from or why.
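For what it is worth, here is the one-line version of that equivalence (my notation, not Kelly’s): if \(W_i\) is your bankroll after round \(i\), then the per-round compound return over \(n\) rounds is \((W_n/W_0)^{1/n}\), and
\[
\log\left(\frac{W_n}{W_0}\right)^{1/n} \;=\; \frac{1}{n}\sum_{i=1}^{n}\log\frac{W_i}{W_{i-1}} \;\longrightarrow\; E\!\left[\log\frac{W_i}{W_{i-1}}\right] \quad\textrm{as } n \to \infty
\]
by the law of large numbers (for independent rounds). So the strategy that maximizes the expected logarithm of each round’s growth also maximizes the long-run compound rate.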
Kelly pointed out in his paper that maximizing the expected logarithm of your bankroll per bet is a consequence of his goal to maximize the compound rate of return, and it has nothing to do with any particular concept of “utility”.[1] But that has not stopped lots of people from confusing the two.
Given this defining property of the Kelly Criterion, it is perhaps not so surprising that several people who are famous for their ability to generate large annualized returns are also notable proponents of the Kelly Criterion.
We will meet one of them… next time.
1. Logarithmic utility has various implications in this context; for example, \(\log 0 = -\infty\). Losing one dollar is OK; losing your last dollar is very, very, very bad. Consequently, the Kelly formula will not permit any nonzero chance of losing all of your money. The formula only tells you to go all-in when \(p = 1\); i.e., it’s a sure bet. If you are in the habit of making such bets, you do not need Kelly or anyone else to tell you how to size them.
Let me continue with my example from Part 2. Yes, this example is a toy. But I believe that studying simple cases can help to understand complex ones.
To recap, we have a game where you place a bet that you will win with probability \(p = \frac{2}{3}\) and that pays off 1:1. You have a $1000 bankroll to play this game once per day for two days. You may compound (roll) any win/loss from the first day into the second.
We compared three betting strategies:
- Strategy A (“Rock”): Go all in, always
- Strategy Z (“Paper”): Bet nothing, always
- Strategy ZA (“Scissors”): Bet nothing on the first day and go all in on the second
Changing terminology slightly, let’s say that one strategy “beats” another if it is more likely to leave you with more money in a head-to-head comparison.
We saw last time that — for this two-day game — Paper beats Rock, and Scissors beats Paper, and Rock beats Scissors.
Consider one more strategy:
- Strategy K: Bet \(\frac{1}{3}\) of your current bankroll, always
This is the Kelly bet for this game. The math is simple. When the payoff is 1:1, the Kelly formula reduces to \(p-q\). For this game, \(p = \frac{2}{3}\) and thus \(q = 1-p = \frac{1}{3}\), so Kelly says to bet \(\frac{2}{3}-\frac{1}{3} = \frac{1}{3}\) of your bankroll.
(Note: This hypothetical game has positive expectation; that is, the payoff is more than sufficient to compensate for your chance of losing. If you study any actual casino game and plug its numbers into the Kelly formula, you will get a negative answer, which is Kelly’s way of telling you to take the other side of the bet.)
You can check for yourself that strategy K beats A head-to-head, but it loses to both Z and ZA. The ZA case is easy to see, since ZA leaves you with $2000 six times out of nine, while the best Kelly can do is win twice, leaving you with \($1000 * \frac{16}{9} \approx $1778\). (Z wins too: Kelly only finishes above its starting $1000 when it wins both days, which happens just four times out of nine.) I suppose this makes it “Dynamite”, blowing up Rock while Paper smothers it and Scissors cut its fuse. And we will pretend I designed the example this way on purpose.
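If you do not feel like enumerating the nine cases by hand, here is a small script that reproduces the head-to-head records (the strategy labels and the $1000 bankroll come from the example; everything else is my own sketch):

```python
from itertools import combinations, product

BANKROLL = 1000.0

def play(strategy, outcomes):
    """Apply a betting strategy to a sequence of win/loss outcomes (1:1 payoff)."""
    wealth = BANKROLL
    for day, won in enumerate(outcomes):
        bet = strategy(day, wealth)
        wealth += bet if won else -bet
    return wealth

strategies = {
    "A  (all-in)":       lambda day, w: w,
    "Z  (never bet)":    lambda day, w: 0.0,
    "ZA (skip, all-in)": lambda day, w: 0.0 if day == 0 else w,
    "K  (bet 1/3)":      lambda day, w: w / 3.0,
}

# Each day you win with probability 2/3, so model a day as one of three
# equally likely sub-outcomes: two wins and one loss. Two days -> 9 cases.
cases = list(product([True, True, False], repeat=2))

for (name1, s1), (name2, s2) in combinations(strategies.items(), 2):
    wins1 = sum(play(s1, c) > play(s2, c) for c in cases)
    wins2 = sum(play(s2, c) > play(s1, c) for c in cases)
    print(f"{name1} vs {name2}: {wins1}-{wins2} (ties {len(cases) - wins1 - wins2})")
```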
Now wait a minute… Did we just beat the Kelly Criterion?
Yes. Yes, we did. For the two-day version of this game.
But look at Strategy ZA and tell me how to extend it to three days. Or 10 days, 1000 days, 1 million days… You will find it becomes harder and harder to develop any strategy to beat Kelly’s simple “always bet \(\frac{1}{3}\)”. This includes adaptive approaches that change strategy based on your win/loss record.
I want to mention again that, in all cases, Strategy A (good old Rock) still has the highest expectation value. For example, if you come back every year for 100 years and play the 10-day game with Strategy A, you will probably win the $1 million once or twice, which is enough to outrun Kelly’s expected ~$2900 per year. You will still go bust the other 98 years, of course.
And if we extend the game to 100 days, and you stick with Strategy A, you have to come back for something like \(10^{17}\) years for a decent shot at seeing your astronomical payoff and pulling ahead.
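The arithmetic behind that estimate: to pull ahead you need to win all 100 rounds at least once, and
\[
P(\textrm{win all 100}) = \left(\tfrac{2}{3}\right)^{100} \approx 2.5\times10^{-18}
\quad\Longrightarrow\quad
\textrm{expected wait} \approx \left(\tfrac{3}{2}\right)^{100} \approx 4\times10^{17}\ \textrm{years.}
\]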
I believe I have now beaten this example into the ground, and I am debating what direction to head. Tune in next time to find out.
Welcome to Casino Nemo! You will like it here.
We have this game where you place a bet and then we roll a fair six-sided die. If it lands 1 or 2, we keep your bet; if it lands 3 through 6, you get back your bet times two (1:1 payoff).
As I said, you will like it here.
Pretend that you have not read Part 1 and consider: How much should you bet?
The answer is… “It depends”.
Suppose you are visiting Nemoville (Nemoton? Nemostan?) for a ten-day vacation, and we only let you play this game once per day. Suppose further that your spouse gives you a strict allowance of $100 each day for gambling. It is fairly clear, I think, that you should bet the entire $100 every day. You will probably win around 2/3 of the time, so you expect to finish the vacation with roughly \($200 * 10 * \frac{2}{3} = $1333\), and no other strategy has a higher expectation value. In fact, the more days you play, the better off you become relative to other strategies (both in total wealth and in likelihood) by betting your entire $100 allowance every day.
Call this strategy of always betting everything “Strategy A”.
Now, suppose when you return the following year, your spouse changes the rules and gives you a single $1000 allowance for the entire 10 days. And you are allowed to compound; i.e. roll your winnings/losses forward from one day to the next.
If you follow Strategy A and bet your entire bankroll every day for 10 days, there is a \(1-(\frac{2}{3})^{10} \approx 98.3\%\) chance you will lose one of the 10 bets and thus all of your money. You do have a chance of winning all 10 bets and $1.024 million, but that is only 1.7%. If we extend this game to 20 or 30 days, your chances of winning every bet become vanishingly small very quickly.
Note that the payoff for Strategy A, if you do manage to win and compound over many days, becomes ludicrously huge; so huge that this strategy still has a higher expectation value than any other. Yet if you play it long enough — and probably not even very long — you will definitely lose everything.
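To put numbers on that for the 10-day version: the expected value compounds at \(\frac{4}{3}\) per day, while the chance of surviving all ten bets shrinks geometrically:
\[
E[\textrm{final wealth}] = $1000 \cdot \left(\tfrac{4}{3}\right)^{10} \approx $17,758,
\qquad
P(\textrm{never busting}) = \left(\tfrac{2}{3}\right)^{10} \approx 1.7\%.
\]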
So… Perhaps maximizing expected payoff is not the best goal. But then what is?
Maybe we can simplify the problem. Let’s reduce your vacation to just two days. You have your $1000 allowance, and you get to roll your win/loss from Day 1 into Day 2.
Four things can happen:
- You win both days (4 chances in 9)
- You win on the first day but lose on the second (2 chances in 9)
- You lose on the first day but win on the second (also 2 chances in 9)
- You lose both days (1 chance in 9)
Now, Strategy A (bet it all both days) will leave you with $4000 in case (1) and $0 in the other cases, for an expected value of \($4000 * \frac{4}{9} = $1778\). And this is the highest expectation of any strategy.
On the other hand, Strategy A leaves you with nothing more than half the time. So maybe you should try something else?
Define “Strategy Z” as: Bet zero, always.
We could say one strategy is “better” than another if it is more likely to win head-to-head. Like say you are on vacation with the neighbors, and your spouse does not care how much money you win or lose, as long as you wind up with more than the neighbors.
By this definition, how does Strategy Z compare to Strategy A? Well, A beats Z 4 times out of 9 via case (1), but loses 5 times out of 9. So, by this definition, Z is better. (Sometimes the only way to win is not to play.)
We can toy with other ideas. Consider “Strategy ZA”: Bet zero on the first day and everything on the second.
Let’s compare this to Strategy Z. In case (1), ZA wins by leaving you with $2000 versus Z’s $1000. Similarly for case (3). ZA does lose to Z in cases (2) and (4), but those only combine to 3 chances out of 9. So Strategy ZA beats strategy Z 6 times out of 9 and is therefore “better”.
To recap: By this definition of “better”, Z is better than A, and ZA is better than Z.
So it must follow that ZA is better than A, right? Let’s check.
Case (1) – A wins. Case (2) – tie. Case (3) – ZA wins. Case (4) – tie. (Verify these for yourself). But Case (1) has 4 chances in 9, while Case (3) only has 2 in 9. Therefore, A is actually better than ZA.
All of which is a long way of saying that this notion of “better” is not an ordering, which means “better” is a pretty bad name for it (see related video). We just got ourselves into a rock/paper/scissors game with the neighbor. I hate it when that happens.
I stumbled across this example while trying to reason my own way toward the Kelly Formula. It turns out this does not work, and Kelly-type arguments have little or nothing to say about examples like this. To arrive at Kelly, we have to simplify our example not by reducing the rounds to two, but by increasing them to infinity. Once we do that, an analogous definition of “better” actually does produce an ordering on betting strategies; and under that ordering, Kelly is “better” than anything else in the long run.
But the whole framework kind of breaks down for finite cases, which is one reason those Nobel laureates were non-fans of the Kelly Criterion. Another is whether beating the neighbor is actually the right goal.
More next time.
Last week, on PredictIt, the “Yes” contract for Amy Barrett becoming Trump’s Supreme Court nominee was trading at an implied probability of 40%. Based on my own reasoning, I estimated her chances at closer to 20%. Put another way, the “No” contract was offered for $0.60, while I thought it was worth $0.80. So I decided to place a bet.
Question: How much should I bet?
I have learned that this is a surprisingly interesting question, one that once inspired Nobel laureates and billionaire investors to publish multiple academic papers calling each other morons.
Let me start with the answer. Well, the answer according to some. I found most expressions of this formula hard to remember, so I will (a) put it here up front where I can find it and (b) cast it in a simple form.
Define:
\[
\begin{align*}
p &= \textrm{your (estimated) probability of winning} \\
q &= \textrm{the opposite} = 1 - p \\
p' &= \textrm{the market price (imputed probability)} \\
q' &= \textrm{the opposite} = 1 - p'
\end{align*}
\]
Write down \(p-q\) and \(\frac{p'}{q'}\) next to each other without any parentheses:
\[p-q\frac{p'}{q'}\]
This is the fraction of your bankroll you should bet. Note that \(\frac{q'}{p'}\) is just the payoff on a winning bet, as in 1:1, 2:1, 10:1, or whatever, so \(\frac{p'}{q'}\) is its reciprocal. This version of the formula directly applies to markets where winning contracts pay $1, like PredictIt.
So, for my example, \(p = 0.8\), \(q = 0.2\), \(p' = 0.6\), \(q' = 0.4\), and I should have bet \(0.8 - 0.2(\frac{0.6}{0.4}) = 0.8 - 0.3 = 0.5\), or half my bankroll.
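If you would rather let a computer do that arithmetic, here is the same formula as a few lines of Python (the function name is mine, not anything standard):

```python
def kelly_fraction(p, market_price):
    """Fraction of bankroll to put on a contract that pays $1 if it wins.

    p            -- your own estimated probability of winning
    market_price -- the contract price, i.e. the imputed probability p'
    """
    q = 1 - p
    q_market = 1 - market_price
    return p - q * (market_price / q_market)

# The Barrett example: I think "No" is worth 0.80; the market sells it at 0.60.
print(kelly_fraction(0.8, 0.6))   # -> 0.5, i.e. bet half the bankroll
```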
This formula is called the Kelly Formula or Kelly Criterion. Describing where it comes from, some of its properties, and maybe a bit of its amusing history is the subject of this series. Which I might actually finish for a change.
The first gadget I purchased was this bad boy, the Dylos DC1100 Pro:
If I could have just one device for measuring air quality, this… Well, this would not be it. It does not measure CO2. It does not measure humidity. It does not measure temperature. It does not have a Web server or a wireless card or indeed any connectivity whatsoever. It does not have a battery. It does not even measure the same thing government agencies use for their air quality indices.
What it does do is provide a high-quality laser particle count. And hey, do you really need a Web server to count particles?
Two things sold me.
First, this 2007 discussion on hvac-talk.com. (Did you know there is an “hvac-talk.com”? Because of course there is.)
The discussion goes like this:
Person A: “Anyone know anything about this new inexpensive particle counter?”
Person B: (standing up to hide butt crack) “Ya get what ya pay for is all I’m sayin'”
Person C: “Hi, I am the engineer who designed the DC1100…” (proceeds to tear Person B a second butt crack)
That was when I placed my order. OK, so technically just one thing sold me.
Some brief history. Prior to 2007, a decent particle counter cost thousands of dollars, and a cleanroom-quality counter ran $10K or more. Then this little company came out with this device, and various labs started comparing it to their research-grade equipment, and found… Hey, it works pretty well for a sub-$300 gadget. Many amateurs use the Dylos as their “trusty” golden reference.
I became further sold while browsing the Dylos site. For example:
I feel like I know these guys… They are old-school ninja EEs. If you ever met a real monster Electrical Engineer, you know what I am talking about. Give one a soldering iron[1] and some coffee, come back later, and you are guaranteed to see something amazing. Just don’t touch it.
I wanted to make mine portable, so I bought an XTPower MP-10000 external battery pack. Works great.
If you want to pull samples from the device, for a modest extra charge Dylos will provide an RS-232 serial output. If you do not know what that is, or even if you do, I do not recommend it, because there are other devices you should buy in addition. All right, all right, “instead”. A topic for a later installment.
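(If you do get the serial option anyway, a reader can be just a few lines. This is only a sketch: I am assuming the commonly reported Dylos format of one comma-separated “small,large” count line per minute at 9600 baud, and the port name will certainly differ on your machine, so check the manual before trusting any of it.)

```python
import serial  # pip install pyserial

# Assumptions: 9600 baud, 8N1, one "small,large" particle-count line per minute.
# "/dev/ttyUSB0" is just an example port name; yours will differ.
with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=120) as port:
    while True:
        line = port.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue  # read timed out with no data; keep waiting
        small, large = (int(field) for field in line.split(","))
        print(f"small particles: {small}   large particles: {large}")
```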
The sole difference between the Pro and non-Pro versions is that the former is calibrated to see particles down to 0.5 microns, while the latter “only” sees down to 1 micron.
I will close by mentioning this device’s relevant limitations.
- It sees water droplets as particles, so the measurements vary with humidity.
- It only provides a count of particles, while all of the “standard” air quality metrics are based on particle mass, not particle count. This is not as bad as it sounds for two reasons. First, the use of particle mass was an arbitrary choice based on research in the 1950s; more recent research suggests some negative effects are better correlated with particle count anyway. Second, if your ensemble of pollution sources is fairly stable, particle masses and counts are well-correlated to each other.
- It does not “see” ultrafine particles. But neither does anything else at any sane price point, for now.
Bottom line: While this is not the only device I would want to own, I am glad to have it.
Next time: PM2.5 etc.
1. but not, heaven forbid, a keyboard