
When to think less

In such a complex world, it is easy to overthink. Corin Hadley explores why thinking more simply often gives better answers.

after Algorithms To Live By (Brian Christian and Tom Griffiths, 2016)

By Corin Hadley, SciTech Editor

Each day we make an uncountable number of decisions, from the seemingly innocuous to the big fundamental stuff. How do we treat the people around us? What do we spend our time doing? What do we make (or throw in the microwave) for dinner?

The bigger the decision, the more we might feel like we need to mull it over. It’s easy to get sucked in, though, and sometimes thinking about it becomes overthinking about it. But should we even be trying to 'consider it carefully'?

When we make decisions, we are really trying to predict the future. In weighing up your options, you are trying to imagine what life will be like after the event: will breaking up with them make you less stressed? Will spending Christmas with your dad infuriate your mum?

While computer science might not hold the answers to those exact questions, a lot of thought has gone into how best to make predictions. From marketing to astrophysics, developing computer models that can predict the future is big business. While humans do this kind of modelling and prediction automatically, when you're trying to tell a computer how to do it you’re forced to consider what the truly optimal method is.

A model is a set of patterns that describe how some chosen variables have historically responded to change. This could be how customer satisfaction varies with the sugar content of a sweet, or how your happiness varies with who you spend time with. By extrapolating these patterns (assuming existing trends will continue into new data), you can make predictions, and thus make decisions about what to do.
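If it helps to see that written down, here's a minimal sketch in Python (with entirely made-up numbers for the sweet example) of what extrapolating a pattern looks like: fit a straight-line trend to past data, then read the line forwards into territory you haven't seen.

```python
import numpy as np

# Made-up history: average customer satisfaction for sweets of varying sugar content
sugar = np.array([1.0, 2.0, 3.0, 4.0, 5.0])          # grams of sugar per sweet
satisfaction = np.array([2.1, 2.9, 4.2, 4.8, 6.1])   # average rating out of 10

# The "model": a straight-line pattern fitted to the historical data
slope, intercept = np.polyfit(sugar, satisfaction, deg=1)

# The "prediction": extrapolate the pattern to a sugar level we've never actually tried
print(f"predicted rating at 7 g of sugar: {slope * 7 + intercept:.1f}")
```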

If you could create a model that factored in every single variable perfectly, you could make 100 percent accurate predictions. Of course, in the real world, there are always more variables than it is possible to consider. You can never think of everything.

when we start to make predictions from real data, considering too many factors leads to a problem

It seems to follow that if we want the best model we can make, we should consider as many of the contributing factors as possible. However, it turns out that when we start to make predictions from real data, considering too many factors leads to a problem known in computer science as overfitting.

As you increase the complexity of a model, its predictions become more and more sensitive to the exact data you feed in. This means small amounts of incompleteness or error in your input information will drastically skew your prediction.

Not everyone’s cup of tea, but graphs are probably the best way to show this effect:

You can see that the more powers of x (x, x², x³...) are included, the better the model fits the known data, but the extrapolated prediction starts to vary wildly.
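If code is more your thing than graphs, here's a small illustrative sketch in Python (using invented data) of the same effect: each model matches the known points at least as well as the last, but as more powers of x are allowed in, the extrapolated prediction becomes less and less trustworthy.

```python
import numpy as np

rng = np.random.default_rng(0)

# The "known data": a simple rising trend plus a little noise
x = np.arange(8.0)
y = x + rng.normal(scale=0.5, size=x.size)

# Fit models of increasing complexity (more powers of x),
# then ask each one to extrapolate beyond the data it has seen
for degree in (1, 3, 7):
    coeffs = np.polyfit(x, y, deg=degree)
    fit_error = np.mean((y - np.polyval(coeffs, x)) ** 2)   # how well it matches the past
    prediction = np.polyval(coeffs, 10.0)                    # what it claims about x = 10
    print(f"degree {degree}: fit error {fit_error:.3f}, prediction at x = 10: {prediction:.1f}")
```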

But quite suddenly we're talking about powers of x and everyone (myself included) is a bit confused. I'll try to link this back to a real situation, like they do in The Big Short (minus half-naked Margot Robbie).

Say you're trying to decide whether a new friend actually likes you. Every time you see them you both have a great time, and they make you feel needed and valued. That’s the overall trend you’ve seen, considering a couple of big factors (how you feel with them, what they tell you). Overfitting describes convincing yourself they hate you based on some irrelevant variable, like the number of times they didn’t like the food you’d made (= 1) or the times they’ve cancelled (= 2).

So, when you’re working in a system with incomplete data - e.g. life - a large, complex model that accounts more completely for the information you have is likely to produce wildly inaccurate predictions.

How do you prevent this ‘overfitting’? How do you know which variables to include in your model? In the computer modelling industry, much thought and specific terminology have been devoted to the processes we humans already use to avoid overfitting:

Regularisation describes applying a downward pressure on the complexity of your model. In a computer, this might be a points system where the computer gets a rating for each model it produces, winning points for accuracy, efficiency, and, importantly, simplicity.
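As a toy illustration of that points system (not how any particular modelling library actually does it), you could rate each candidate model on how well it matches the past and then dock points for every extra term it uses:

```python
import numpy as np

def rate(coeffs, x, y, penalty_per_term=1.0):
    """Score a model: reward accuracy, apply downward pressure on complexity."""
    accuracy = -np.mean((y - np.polyval(coeffs, x)) ** 2)   # closer fit = higher score
    complexity = penalty_per_term * len(coeffs)             # every extra term costs points
    return accuracy - complexity

rng = np.random.default_rng(1)
x = np.arange(8.0)
y = x + rng.normal(scale=0.5, size=x.size)

for degree in (1, 7):
    coeffs = np.polyfit(x, y, deg=degree)
    print(f"degree {degree}: score {rate(coeffs, x, y):.2f}")
```

The degree-seven model matches the past more closely, but once simplicity earns points the straight line comes out ahead.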

In humans, we have inbuilt systems that push us towards easy solutions, like tiredness and limited memory. Not being able to keep every aspect of a problem in your head, or getting too tired to think about it any more, are not failures; they're hard-wired systems that prevent overfitting.

This is why talking to someone about something that's on your mind can help: you're forced to strip the whole thing back to just what's important

Talking your problems through with someone else forces you to simplify your model even further to suit their ability to listen and your ability to explain. This is why talking to someone about something that's on your mind can help: you're forced to strip the whole thing back to just what's important. Would you mention that random time they cancelled?

You can also use the link between how long you spend developing your model and how complex it ends up being to prevent overfitting. Known by the intuitive name ‘early stopping’ in computer science, setting a hard limit on how much you can add to the model stops it ever getting too complex. Perhaps this takes the form of a pros-and-cons Post-it note, but you’ve only got that one, no more. Maybe you just have to choose by Wednesday.
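To stretch the analogy (this is a deliberately silly sketch, not real machine-learning code), the Post-it-note version of early stopping looks something like capping how many considerations your model of the decision is ever allowed to hold:

```python
MAX_CONSIDERATIONS = 3   # one Post-it note's worth, no more

possible_factors = [
    "how I feel when we're together",
    "what they actually tell me",
    "that one time they cancelled",
    "the night they didn't like my cooking",
    "the exact wording of a text from March",
    # ...an endless supply of ever-smaller details
]

considerations = []
for factor in possible_factors:
    if len(considerations) >= MAX_CONSIDERATIONS:
        break   # early stopping: the model isn't allowed to grow any further
    considerations.append(factor)

print(considerations)   # only the big stuff makes it onto the Post-it
```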

When you looked at the set of data above, which did you see first: the overall rising trend, or the subtle undulations from one point to the next? By the same logic, if you want to make a decision, the answer might be the first thing that popped into your mind. Again, the people you talk to are far more likely to listen to the first few things you say. If, at the end of a two-hour rant, your listener summarises it by just repeating back the first thing you said, they're not being flippant; they're helping you avoid overfitting.

Making decisions is not a choice between carefully considered rationality and gut instinct. Your first instinct is often the rational solution. And the more complex and uncertain the situation, the more rational that first instinct becomes.

Paint in broad strokes. Trust your instincts. It’ll not just be okay, but better.

All featured images: Corin Hadley/ Procreate

