About

Title: Algorithms to Live By
Authors: Brian Christian, Tom Griffiths
Category: supplementals
Number of Highlights: 25
Date: 2025-06-10
Last Highlighted:


Highlights

Any yardstick that provides full information on where an applicant stands relative to the population at large will change the solution from the Look-Then-Leap Rule to the Threshold Rule and will dramatically boost your chances of finding the single best applicant in the group.

Tags:algorithm,information


Optimal stopping tells us when to look and when to leap. The explore/exploit tradeoff tells us how to find the balance between trying new things and enjoying our favorites. Sorting theory tells us how (and whether) to arrange our offices. Caching theory tells us how to fill our closets. Scheduling theory tells us how to fill our time.

Tags:algorithm


When balancing favorite experiences and new ones, nothing matters as much as the interval over which we plan to enjoy them.

Tags:experience


The success of Upper Confidence Bound algorithms offers a formal justification for the benefit of the doubt. Following the advice of these algorithms, you should be excited to meet new people and try new things—to assume the best about them, in the absence of evidence to the contrary. In the long run, optimism is the best prevention for regret.

Tags:algorithm,optimism
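The "optimism" this highlight describes is literal in the algorithm: each option gets a confidence bonus that shrinks as evidence accumulates. A minimal sketch of UCB1 (the function name, arm setup, and reward model here are illustrative, not from the book):

```python
import math
import random

def ucb1(pull, n_arms, total_pulls):
    """UCB1 sketch: `pull(arm)` returns a reward in [0, 1]."""
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for t in range(total_pulls):
        if t < n_arms:
            arm = t  # try every arm once: the benefit of the doubt
        else:
            # average reward so far, plus an optimism bonus that
            # shrinks as an arm accumulates evidence
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t + 1) / counts[a]))
        r = pull(arm)
        counts[arm] += 1
        sums[arm] += r
    return counts

random.seed(0)
means = [0.2, 0.5, 0.8]  # hidden payout rates of three "new things"
counts = ucb1(lambda a: 1.0 if random.random() < means[a] else 0.0,
              n_arms=3, total_pulls=500)
```

Untried and under-tried arms carry a large bonus, so the algorithm assumes the best about them until the data says otherwise.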


Sorting something that you will never search is a complete waste; searching something you never sorted is merely inefficient.

Tags:algorithm,efficiency,expense,favorite


In any optimal stopping problem, the crucial dilemma is not which option to pick, but how many options to even consider.

Tags:algorithm


This is the first and most fundamental insight of sorting theory. Scale hurts.

Tags:algorithm,technology


The moral is that you should try to stay on a single task as long as possible without decreasing your responsiveness below the minimum acceptable limit. Decide how responsive you need to be—and then, if you want to get things done, be no more responsive than that.


The lesson is this: it is indeed true that including more factors in a model will always, by definition, make it a better fit for the data we have already. But a better fit for the available data does not necessarily mean a better prediction.

Tags:algorithm,perception
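The fit-versus-prediction point can be made concrete: a polynomial threaded through every data point always beats a straight line on the data we have, even when the true signal is a line. This is a stdlib-only sketch (the data and function names are illustrative):

```python
import random

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return lambda x: a * x + b

def interpolate(xs, ys):
    """Lagrange polynomial through every point: a perfect fit to the data."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

random.seed(1)
xs = [float(i) for i in range(8)]
ys = [2 * x + random.gauss(0, 1) for x in xs]  # true signal: y = 2x, plus noise

line, poly = fit_line(xs, ys), interpolate(xs, ys)
poly_train_err = max(abs(poly(x) - y) for x, y in zip(xs, ys))  # essentially 0
line_train_err = max(abs(line(x) - y) for x, y in zip(xs, ys))  # > 0

# At a held-out point the perfect fit typically extrapolates far worse,
# because it has memorized the noise along with the signal.
pred_line, pred_poly = line(9.0), poly(9.0)  # true value: 18
```

More factors always improve the fit to the available data; prediction is another matter.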


If you want to be a good intuitive Bayesian—if you want to naturally make good predictions, without having to think about what kind of prediction rule is appropriate—you need to protect your priors. Counterintuitively, that might mean turning off the news.

Tags:perception,prediction


Priority inheritance. If a low-priority task is found to be blocking a high-priority resource, well, then all of a sudden that low-priority task should momentarily become the highest-priority thing on the system, “inheriting” the priority of the thing it’s blocking.

Tags:priorities,work
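The mechanism in this highlight fits in a few lines. A toy sketch, with made-up task names, of the boost-and-restore behavior (real schedulers track this per lock, which this sketch omits):

```python
class Task:
    def __init__(self, name, priority):
        self.name = name
        self.base_priority = priority
        self.priority = priority

def inherit(blocker, waiter):
    """If `blocker` holds a resource `waiter` needs, boost it temporarily."""
    if waiter.priority > blocker.priority:
        blocker.priority = waiter.priority

def release(blocker):
    """On releasing the resource, the blocker drops back to its own priority."""
    blocker.priority = blocker.base_priority

low = Task("logger", priority=1)
high = Task("steering", priority=10)
inherit(low, high)  # the logger now runs at priority 10 until it releases
```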


Something normally distributed that’s gone on seemingly too long is bound to end shortly; but the longer something in a power-law distribution has gone on, the longer you can expect it to keep going.


“To try and fail is at least to learn; to fail to try is to suffer the inestimable loss of what might have been.”

Tags:failing,learning


Minimizing the sum of completion times leads to a very simple optimal algorithm called Shortest Processing Time: always do the quickest task you can.
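Shortest Processing Time really is as simple as it sounds: sort the tasks by duration. A small sketch (job durations are illustrative):

```python
def schedule_spt(durations):
    """Shortest Processing Time: run the quickest remaining task first."""
    return sorted(durations)

def total_completion_time(order):
    """Sum of the times at which each task finishes, in the given order."""
    elapsed, total = 0, 0
    for d in order:
        elapsed += d
        total += elapsed
    return total

jobs = [4, 1, 3]
spt = schedule_spt(jobs)  # [1, 3, 4]
# SPT finishes tasks at 1, 4, 8 -> sum 13.
# The reverse order finishes at 4, 7, 8 -> sum 19.
```

Running short tasks first gets them out of the way of everything queued behind them, which is why the sum of completion times drops.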


Intuitively, we think that rational decision-making means exhaustively enumerating our options, weighing each one carefully, and then selecting the best. But in practice, when the clock—or the ticker—is ticking, few aspects of decision-making (or of thinking more generally) are as important as one: when to stop.

Tags:algorithm,cognition


In general, it seems that people tend to over-explore—to favor the new disproportionately over the best.


This has emerged as one of the major insights of traditional game theory: the equilibrium for a set of players, all acting rationally in their own interest, may not be the outcome that is actually best for those players.

Tags:game_theory


In fact, for any possible drawing of w winning tickets in n attempts, the expectation is simply the number of wins plus one, divided by the number of attempts plus two: (w+1)⁄(n+2).

Tags:algorithm
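This is Laplace's Rule of Succession, and the formula from the highlight translates directly (the function name is mine):

```python
from fractions import Fraction

def laplace_estimate(wins, attempts):
    """Expected success chance after `wins` wins in `attempts` tries: (w+1)/(n+2)."""
    return Fraction(wins + 1, attempts + 2)

# No data at all: (0+1)/(0+2) = 1/2, a uniform prior guess.
# One try, one win: (1+1)/(1+2) = 2/3 -- favorable, but far from certainty.
```

The +1 and +2 act as imaginary prior observations (one win, one loss), which is what keeps a single success from being read as a sure thing.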


Full information means that we don’t need to look before we leap. We can instead use the Threshold Rule, where we immediately accept an applicant if she is above a certain percentile. We don’t need to look at an initial group of candidates to set this threshold—but we do, however, need to be keenly aware of how much looking remains available.

Tags:algorithm,decisions,information


Simply put, exploration is gathering information, and exploitation is using the information you have to get a known good result.

Tags:information


Part of what makes real-time scheduling so complex and interesting is that it is fundamentally a negotiation between two principles that aren’t fully compatible. These two principles are called responsiveness and throughput: how quickly you can respond to things, and how much you can get done overall.


Fundamentally, overfitting is a kind of idolatry of data, a consequence of focusing on what we’ve been able to measure rather than what matters.

Tags:algorithm,data,statistics


The chance of ending up with the single best applicant in this full-information version of the secretary problem comes to 58%—still far from a guarantee, but considerably better than the 37% success rate offered by the 37% Rule in the no-information game.

Tags:algorithm
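The 37% baseline mentioned here is easy to check by simulation. A sketch of the no-information Look-Then-Leap strategy (function names and trial counts are mine; rank 1 denotes the single best applicant):

```python
import random

def look_then_leap(ranks, look):
    """Reject the first `look` applicants, then take the first who beats them all."""
    best_seen = min(ranks[:look]) if look else float("inf")  # rank 1 = best
    for r in ranks[look:]:
        if r < best_seen:
            return r
    return ranks[-1]  # never beaten: forced to take the last applicant

def success_rate(n, look, trials, seed=0):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        ranks = list(range(1, n + 1))
        rng.shuffle(ranks)
        if look_then_leap(ranks, look) == 1:
            wins += 1
    return wins / trials

rate = success_rate(n=100, look=37, trials=5000)  # hovers near 0.37
```

Looking at the first 37% and leaping at the next best-yet candidate wins roughly 37% of the time, which is the no-information figure the full-information Threshold Rule improves to 58%.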


Much as we bemoan the daily rat race, the fact that it’s a race rather than a fight is a key part of what sets us apart from the monkeys, the chickens—and, for that matter, the rats.

Tags:perspective,work


Instead, tackling real-world tasks requires being comfortable with chance, trading off time with accuracy, and using approximations.

Tags:algorithm,problem_solving