The Perils of Gift Giving

The gifting season is upon us.

Like many activities that require interacting with others, gift giving is an apparently simple, benign exercise that can quickly turn into a can of worms. The main problem is that the giver and the recipient have different perspectives and often follow different incentives, which sometimes end up clashing.

As a giver, for example, you tend to think that more is better: two presents are a more generous offering than one. But there’s a twist. Imagine you’re buying a $300 sweater for a friend. At the counter, you add in a $25 gift card, so that he can get himself a necktie or a few socks. Which is the better gift: just the sweater, or the sweater-and-gift-card combo? In this case, one is better than two, because as a recipient you tend to average the value of all the gifts you receive from someone: the small gift lowers the perceived value of the big one, and the sweater alone makes the better impression.
You should resist the temptation to add candy, gift cards or novelty items to your main present, whatever its value.

We have a peculiar way of evaluating items in a group. Take this example from Nobel laureate Daniel Kahneman. Suppose there are two sets of dinnerware that you can buy:

Set A has 24 pieces.
Set B has 40 pieces. It has all the pieces from Set A plus an additional 16 pieces, but 9 of these are broken. 

Which set is worth more? Obviously Set B, since it has everything from Set A plus an additional 7 intact pieces. And indeed, in the study this example comes from, participants valued Set B higher ($32) than Set A ($30) when presented with both options.
But in single evaluation, when each set was presented by itself, the results were reversed: Set A was valued far higher ($33) than the larger Set B ($23). Why? Because the average value per dish in Set B is much lower due to the broken dishes, which nobody wants to pay for. Evaluating by average makes the set with fewer pieces worth more, and taking items out of the larger set improves its value. Economic theory dictates the exact opposite: «adding a positively valued item to a set can only increase its value».
But not so in behavioral economics.
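To make the averaging idea concrete, here is a minimal sketch in Python. The sweater and gift-card figures come from the example above; valuing the broken dishes at zero (and intact ones at a nominal price) is my own assumption, since the study doesn’t say how participants discounted them.

```python
# Toy model: a bundle judged by the average value of its items rather than their sum.
# Sweater/gift-card figures are from the example above; pricing intact dishes at a
# nominal $1.25 each and broken ones at $0 is an assumption made for illustration.

def perceived_value(items):
    """Behavioral reading: the bundle is judged by its average item value."""
    return sum(items) / len(items)

sweater_only = [300]
sweater_plus_card = [300, 25]
print(perceived_value(sweater_only))        # 300.0
print(perceived_value(sweater_plus_card))   # 162.5 -- the cheap add-on drags the bundle down

set_a = [1.25] * 24               # 24 intact pieces
set_b = [1.25] * 31 + [0.0] * 9   # 31 intact pieces plus 9 broken ones
print(perceived_value(set_a) > perceived_value(set_b))  # True: Set A "wins" on average
print(sum(set_a) < sum(set_b))                          # True: Set B wins on total value
```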

So yeah, less is more and all that. Surely, you might say, reducing the act of gift giving to just a matter of value is simplistic, not to mention heartless. But there’s a little economist inside our heads who strongly disagrees: that is why some people only give out kitschy, sarcastic gifts that have no intrinsic value beyond the chuckle they produce once the wrapping is torn off. It’s an efficient way of avoiding the problem of a sincere gift, and it works well with acquaintances or your sister, but it’s not an ideal option for everyone else in between.

In purely economic terms, the best gift is cash. I recommend it if you happen to have a friend who’s an economist, because then the gift will do double duty and also get you a chuckle. A classic study called The Deadweight Loss of Christmas explains why: in a survey, it was found that «holiday gift-giving destroys between 10 percent and a third of the value of gifts». In other words, if you give your friend a $100 shirt, his perceived value of it, or how much he’d be willing to pay for it, will be 70 to 90 dollars. That’s the deadweight loss, which would be avoided, to the benefit of everyone involved, by just handing out cold, hard cash. The world would be a gloomy place, but it would save the roughly $10 billion that are lost each year in the very transaction of gift giving.

When buying stuff, you also have to deal with another problem: choice. The average supermarket carries about 40,000 items, twice as many as just a decade ago. You can only marvel at the number of different types of jeans you can find in a Gap store now compared to just twenty years ago. More choice is good, right? Psychologist Barry Schwartz has a different opinion, as he explains in his book The Paradox of Choice.
In a memorable study conducted at a gourmet food store, jams were offered for tasting. On one day, 24 flavors were available; on another day, just 6. Surprisingly, 30 percent of the shoppers who tasted from the small selection went on to buy a jar, compared to just 3 percent of those who tasted from the wider array. Apparently, more choice creates «decision paralysis» and a lower level of satisfaction, because it leaves room for greater regret: the chance of making the wrong decision rises as the number of options escalates.

(While not everyone agrees with this theory, companies seem to have picked up on it: when Procter & Gamble reduced the number of variations of its Head and Shoulders shampoo from 26 to 15, sales increased by 10 percent. And when the Golden Cat Corporation eliminated the 10 worst-selling varieties of its kitty litter, sales rose by 12 percent and profits more than doubled thanks to reduced distribution costs).

If that wasn’t enough, be aware that simply buying groceries can mean relinquishing your most personal secrets. Large retail stores are ravenous for information about your shopping habits, and they can use the data gathered from loyalty programs to predict your future needs. Charles Duhigg recounts in his book The Power of Habit an incident that happened at a Target store in Minnesota, when a man walked in protesting the fact that his daughter, still in high school, was getting coupons in the mail for baby cribs and clothes: «Are you encouraging teenage pregnancy?», he complained. When he received an apology call from customer service a few days later, he had to apologize himself:
«I had a talk with my daughter. It turns out there’s been some activities in my house I haven’t been completely aware of. She’s due in August». If you think that’s too much, know that credit card companies can predict a divorce just by analyzing spending patterns.

But back to presents: there’s one final piece of advice that I can give you. If you want to extract the maximum amount of happiness from your purchases, whether it’s something you buy for yourself or for someone else, buy experiences instead of things. The excitement of a brand new object soon fades, while a new experience (a weekend somewhere, a yoga lesson, a concert) gives you a memory that can be revisited and stays with you forever.


The 11 Most Annoying Things on Facebook

There are a number of things on Facebook that just rub me the wrong way. Here’s a list.

1. The Plea to stop unwanted requests.
Let me tell you how this works: those requests are automatic. When you play a game, it will generally try to spam your entire friends list so that it can propagate itself. There’s a simple way around this: go to your privacy settings and block the offending app. Thinking that someone will read your plea, take note of it, and specifically opt you out next time actually makes you sound more stupid than the very person you’ve been getting unsolicited requests from. But it does make you look like you’re above these silly games, because hey, you’re on Facebook for the important, serious stuff. And it’ll also get you likes from other clueless recipients.

2. The Redundant Link
When you post a YouTube video, you first insert the URL in the box, and then Facebook creates a preview of that. That’s when you should remove the original YouTube link you just pasted, and either add your comment or leave it blank. I see so-called social media experts routinely making this mistake. This applies to any link type. It just looks ugly.

3. The Autolike
That’s just not allowed.

4. Vaguebooking
Urban Dictionary defines this as «An intentionally vague Facebook status update, that prompts friends to ask what’s going on, or is possibly a cry for help». You shouldn’t do it.

5. The Countdown

This is a variation on vaguebooking. It’s a very cheesy way of fishing for attention.

6. Addressing people who just gave you a like
This is the Facebook equivalent of calling someone back after they texted you. It’s intrusive. Plus it makes you look really dumb once the second like comes in, because then no one will know who the hell you’re talking to. And it slashes the chances of that person giving you a like in the future, fearing you’ll take the opportunity to start an unwanted conversation again.

7. Putting up fake cool places as your hometown

Judging by Facebook, New York City has a population of approximately 60 million.

8. Your newborn as a profile pic

There’s just no stopping people from forcing down your throat the fact that they can reproduce. Yet it’s probably less creepy than creating a whole new profile for the infant altogether.

9. The Reflex Selfie

This was cool for about 5 seconds the first time someone ever did it. Now it’s incredibly lame: if you’re a photographer, that’s a pretty desperate way of advertising that. If you’re not, then it’s probably a self-involved way of showing you’re artistic. What it really shows is that you own a reflex camera and you want to hide your face.

10. The shared profile
That’s so annoying even Facebook doesn’t like it. So they have launched a new ‘couples’ page, presumably to mitigate the problem. I don’t really know what that is, but you can look it up here.

11. The Feminist Support Group

Here’s how this works: some perfectly normal, average looking woman (possibly past her prime age-wise) posts a vaguely sexy looking picture of herself, often a selfie. At which point a good portion of her female friends feel obliged to comment on how gorgeous she looks. Often by using typically masculine remarks. The unwritten rule is that you should then do the same as the occasion arises. It’s a cheap way of distributing some hollow, feel-good compliments. You can tell when this dynamic is in effect because not a single guy either likes or comments.

Do you have any Facebook pet peeves?

Do You Have to be Mad to be a Scientist?

Great Scott!

“Doc” Brown, from Back to the Future, is peculiar among fictional scientists, because he’s not a villain. A survey of about 1,000 horror films released between 1930 and 1980 reveals that in about a third of the movies, the bad guy is a mad scientist. And while scientific research produces about 40 percent of the threats, scientists are heroes in just one in ten films. But even though Doc is an outlier in intent, he still looks the part: his appearance is modeled after the most famous scientist of all time.

That’s wonderful, right? The greatest genius of them all showing you his quirky side. Nearly everyone will be able to tell you that this is Albert Einstein: good luck having people recognize any other scientist from a photograph. That’s because Einstein is obviously very famous, but also because this photograph conforms beautifully to the stereotype of the mad scientist. This other picture of him is not quite as popular:

But this is the guy you want! He’s the one who came up with the theory of Special Relativity and explained the photoelectric effect, for which he got a Nobel Prize: both accomplishments came in 1905, when Einstein was 26 years old and working at the Swiss Patent Office in Bern. Take a look at good old Charles Darwin, here:

He’s 65 in this iconic photograph, looking like an old sage. Which is probably what you expect him to look like, because we’re somehow primed to associate science with long, white beards. When he boarded the HMS Beagle and started a voyage that would take him around the world and inspire the theory of evolution, he looked more like this:

He was just 22. And by the time the Beagle returned in 1836, at 27, he had already become a celebrity in scientific circles. So much for the old man who looks like Gandalf from The Lord of the Rings.

James Clerk Maxwell, probably the greatest physicist of all time after Newton and Einstein (who kept a photograph of him in his study), wrote an essay on the nature of Saturn’s rings in 1859, aged 28, which remained our best understanding of the problem until the Voyager flybys in the 1980s. He produced his seminal contributions to electromagnetism before he turned 30. Edison and Tesla laid the foundations for their War of Currents in their early 30s. And beloved physicist Richard Feynman developed his Feynman diagrams, which he would use to formulate the theories that won him a Nobel Prize, in his late 20s.

You get the gist of it. Great science comes from young people. But we’re stuck with this ridiculous stereotype of a hoary old man with goggles and smoking flasks. The scientific community is well aware of the problem. Nobel laureate Harry Kroto goes so far as to call the iconic old Einstein “an imposter”, in a brilliant presentation during which he raises this very point. A group of researchers even published a paper, called Breaking down the stereotypes of science by recruiting young scientists, suggesting that the stereotype should be fought by engaging kids in science at an early age.
They write, «If you ask the average ten year old in America what a scientist looks like, they almost always describe an older man with crazy white hair and a lab coat. Students are often repeatedly confronted with stereotypes of science and scientists via television, cartoon, and comic book characters as well as uninformed adults or peers».

Up until 1905, over 60 percent of Nobel laureates had completed their prize-winning work before turning 40, and about 20 percent did it before 30. But by 2000, things had changed: less than 20 percent of winners in physics were rewarded for research concluded before they were 40, and in chemistry the percentage dropped to nearly zero. There are of course many factors at work here, including the fact that it now takes longer to complete your academic training compared to a century ago. But it doesn’t help that the young are forced to perceive science as something that must be in the hands of the old (and crazy).

In 2005, an Australian physician named Barry Marshall won the Nobel Prize for medicine: he discovered that ulcers, long thought to be the work of stress, food, and acid, were actually caused by bacteria, and so could easily be cured with antibiotics. But when he first proposed the idea in 1982, at the age of 31, he was a young doctor from Perth (not the scientific center of the world by any means) trying to overturn a long-standing principle of medical doctrine: he was ridiculed, and no scientific journal agreed to publish his study. So he had to ingest the bacteria himself to prove that he was right.

Stereotypes are very sticky, and this one seems to work particularly well. Einstein used to say, «A person who has not made his great contribution to science before the age of 30 will never do so». While this may be debatable today, it is essential to engage young people earlier on and get rid of this mad scientist crap. Even at the cost of no longer being allowed to say: «Great Scott!».

The Infinite Blades Hypothesis

How many blades does your razor have?

If you’re a customer of one of the two leading brands and you’re on their latest products, it’s likely to be either four or five. Gillette and Schick (known as Wilkinson Sword outside the US) have been waging a razor war for decades, trying to take hold of a global industry worth about $25 billion a year. Gillette’s latest, the Fusion, has five blades; Schick’s Quattro has four, but the company also sells a five-blade model, the Hydro (marketing a five-blade Quattro would make little sense, because quattro means four in Italian and their ads are built around a hand showing four fingers).

It’s a grueling fight, on which both companies spend billions each year in marketing and research. Just the tooling to manufacture a specific razor may cost upwards of $250 million. It’s gone through the courts as well: Gillette sued Schick in 2003, claiming the Quattro infringed one of their patents, but they lost. Nevertheless, Gillette still holds the lion’s share of the market.

Razors are prominent members of a group of products known as loss leaders. You might have noticed that the razor handle, which normally includes two blade cartridges, is very conveniently priced. It’s basically free; the only constraint is that it can’t be priced so low that buying new handles just for the bundled blades becomes cheaper than buying replacement cartridges. The whole scheme is designed to lock you in as a customer for those lucrative cartridges, which are so expensive that they consistently rank among the world’s most shoplifted products, chiefly because of their value-to-size ratio. In other words, Gillette and Schick are willing to take a loss on the initial sale so they can reap the benefits from you later on.
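As a rough sketch of that pricing constraint, here is a toy check in Python. All the prices are invented for illustration; they are not Gillette’s or Schick’s actual numbers.

```python
# Hypothetical prices, chosen only to illustrate the constraint described above.
handle_price = 11.00        # handle bundled with 2 cartridges, sold near or below cost
refill_pack_price = 20.00   # pack of 4 replacement cartridges, where the margin lives

cost_per_blade_via_handle = handle_price / 2        # 5.50 per cartridge
cost_per_blade_via_refill = refill_pack_price / 4   # 5.00 per cartridge

# The bundle can be cheap, but never so cheap that stocking up on handles becomes a
# better source of blades than buying refills -- otherwise the scheme defeats itself.
assert cost_per_blade_via_handle >= cost_per_blade_via_refill
```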

But how many blades do you really need?
Gillette introduced the “safety razor”, so called because the blade is encased in such a way that it’s harder to cut yourself, back in 1904. It took them 67 years to add a second blade, in 1971. They then launched the Mach 3 in 1998 and the Fusion, with five blades, in 2006 (they skipped the four-blade generation because of patents). According to a typically reliable Internet source, this creates a hyperbolic curve that will give us a razor with infinite blades sometime around 2026, but don’t spend too much time thinking about it. It’s interesting to note that in the early years, Gillette didn’t pursue a loss-leading strategy: in fact, the razor was quite expensive. But King C. Gillette didn’t care, because he had patents to protect his invention, so no one could sell cheap knockoff blades for his razor. Moreover, the blades themselves were made of carbon steel, so they would rust quickly: people were simply forced to buy them frequently. Only in 1965, after Schick introduced its own stainless steel blades, did Gillette finally make the switch, even though they had long held a patent on non-rusting blades.
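About that hyperbolic extrapolation: here is a tongue-in-cheek reconstruction in Python. The dates come from the timeline above, the blade counts are the obvious ones (the Mach 3’s three blades are implied by its name), and the hyperbolic form is simply the joke taken literally; depending on which razors you include and how you fit, the asymptote won’t necessarily land on 2026.

```python
# Fit blade count over time to a hyperbola: blades = a / (T - year).
# Equivalently, 1/blades is a straight line in time, and the year T where that
# line crosses zero is where the blade count "goes to infinity". Not a forecast.
import numpy as np

years  = np.array([1904.0, 1971.0, 1998.0, 2006.0])  # one-, two-, three- and five-blade models
blades = np.array([1.0, 2.0, 3.0, 5.0])

slope, intercept = np.polyfit(years, 1.0 / blades, 1)
doomsday = -intercept / slope
print(f"Infinite blades predicted around {doomsday:.0f}")
```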

But although Gillette has funded a number of studies that supposedly confirm the benefits of multiple blades, whether they actually produce a better shave is a matter of debate. Two blades are good because the first raises hairs and the second cuts them. Additional blades could potentially just give you more nicks and ingrown hairs, depending on your shaving technique. Nevertheless, a six-blade razor is already on the market:

Not happy with just having six blades, the folks that sell this even put a shaving cream dispenser in the handle, making it just about the most ridiculous grooming item you can buy.

It gets trickier. You might have noticed that both Gillette and Schick also sell “power” versions of their razors, which run on one AAA battery. It powers a tiny motor, similar to those used in phones for vibration, which supposedly facilitates the shave. The battery is included: Gillette gives you a Duracell, while Schick gives you an Energizer, the two top-selling brands in consumer batteries. Coincidence? No. Gillette bought Duracell in 1996 (they are now both owned by manufacturing giant Procter & Gamble), and Schick was bought by Energizer in 2003. When that happened, Gillette struck pre-emptively by launching a battery-powered version of its Mach 3 razor in late 2003. Schick quickly responded with a power version of its Quattro model. I am really not sure whether a vibrating razor is beneficial to your face, and Gillette has even lost a false-advertising case over this, but it’s interesting to note that both makers are selling you their own batteries, hoping you’ll buy more in the future (of course, while you’re forced to buy the correct blades for your razor, any battery will work, so brand loyalty is somewhat diminished here). If it all sounds exploitative, it’s because it is.

Another glorious field of application for loss leading is printers. Have you ever complained about how expensive ink cartridges and toners are? The reason they sound so expensive is that they cost a significant portion of the price of the printers themselves, which are sold at a loss. But once you buy the printer you’re committed to buying consumables, so manufacturers are willing to give you a bargain on the hardware just to reel you in. So, next time you’re out shopping for an ink cartridge or a toner, and you find it costs half as much as your printer, at least you’ll know that the right question is not Why does the toner cost so much?, but rather Why was the printer so cheap?

The Snooze Dilemma

Waking up is hard to do.

So, to snooze or not to snooze? Well, it turns out that snoozing, like many enjoyable things in life, is genuinely bad for you. And you shouldn’t do it. Here’s why.

First of all, waking up is hard because your body goes through a series of changes. While you sleep, temperature, heart rate and blood pressure all decrease, and you get high on serotonin, a feel-good neurotransmitter that explains why your bed feels so much cozier in the morning than at night. If you align yourself properly with your circadian rhythm, by waking up at roughly the same time every day, your body knows: in the hour before alarm time, it starts to drag you out of that pit by warming up your metabolism. This is the ideal situation, and it explains why you sometimes open your eyes just minutes before your designated wake-up time. If you’re still sleepy and hit the snooze button anyway, you get in the way of that natural reboot process, creating a chemical imbalance in your body, which is now pumping dopamine, the antagonist of serotonin. The end result is a befuddled mess.

On the other hand, if you’re not getting enough sleep in the first place and you’re off your natural rhythm, snoozing might become irresistible. But in this case you risk falling back into deep sleep, only to be ripped out of it nine minutes later. That works against every natural process evolution has devised to ease you out of sleep, and wreaks havoc with your metabolism. It also generally prompts you to just snooze again. And again.

In other words, snooze time is never good. Unfortunately, when you need to make that assessment you’re a groggy half-human who’d kill for sleep. But snoozing is not always a snap judgment: some people construct elaborate snooze routines with multiple alarms that start up to an hour before their actual wake up time, thinking that’s the only way they can make it out of bed. Instead, they just subject themselves to an hour of useless, fragmented sleep that does nothing to soothe their bodies.

But wait, why is snooze time traditionally fixed at exactly nine minutes? Apparently it has to do with the standardized gears inside 1950s alarm clocks: the snooze cog had to mesh with the existing ones, and it could be set at either 9 or 10 minutes. The choice fell on 9, because 10 minutes was thought to be enough to “fall back into deep sleep”.
Another explanation, which I like better, has to do with cheap electronic components: with a 9-minute snooze, a digital alarm clock only has to “watch” the last digit of the minutes to know when to go off again. This allows simpler circuitry to be devoted to the function, and ultimately makes the clock cheaper to produce.
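A minimal sketch of that last-digit trick, in Python rather than circuitry: with a 9-minute interval, the ones digit of the minutes at re-fire time is always one less than it was when you pressed the button, so a single digit is all the clock needs to compare. The function names and the little simulation are mine, purely for illustration.

```python
def snooze_target_digit(pressed_minute: int) -> int:
    # (m + 9) % 10 == (m - 1) % 10, so the clock can simply wait for the
    # minutes' ones digit to roll around to one less than when snooze was pressed.
    return (pressed_minute - 1) % 10

def minutes_until_refire(pressed_minute: int) -> int:
    """Simulate a clock that only ever looks at the last digit of the minutes."""
    target = snooze_target_digit(pressed_minute)
    minute, elapsed = pressed_minute, 0
    while True:
        minute = (minute + 1) % 60
        elapsed += 1
        if minute % 10 == target:
            return elapsed

# Pressing snooze at 7:04, 7:57 or 7:30 always buys exactly 9 minutes,
# even though the clock never compares anything but a single digit.
for pressed in (4, 57, 30):
    print(f"snooze at minute {pressed:02d} -> re-fires {minutes_until_refire(pressed)} minutes later")
```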

Resisting the temptation to snooze is not easy. It’s an interesting problem because it creates a conflict between your present self (“I want to wake up on time tomorrow”) and your future, morning self (“I want to sleep right now”), a staple of behavioral economics. Alarm clock manufacturers have caught on and sell an array of devices that nudge you into waking up. The Clocky alarm, for example, lets you snooze once, then literally comes to life, jumps off your nightstand, and finds a place to hide, all the while blasting an ear-ripping alarm sound. You’re then forced to go find it and switch it off.
The Puzzle alarm is even more taxing on your fragile, unstable cognitive functions: the moment it goes off, it ejects the pieces of a jigsaw puzzle and won’t stop until you have put them back together correctly. But honestly, I don’t think anyone actually wants to incorporate a ridiculous-looking, self-hiding alarm into their lifestyle: a week into using it your rational, present self will just go ‘what the hell’ and give up. By then you’ll either have learned the lesson or gone back to snoozing.

Still, the best anti-snooze alarm of all is, hands down, the SnuzNLuz. It keeps you on your toes by making donations to political causes you hate every time you hit the button.

Alas, it doesn’t really exist: it has a product page at ThinkGeek.com, but it’s nothing more than an April Fools’ prank. Then again, ThinkGeek has turned joke products into reality before, so you never know.

Perhaps the SnuzNLuz has taken a cue from Stickk, a website that encourages you to commit to a goal by setting up a financial stake. No wonder: it was founded by a group of Yale economists, and it capitalizes on the fact that we are all instinctively loss averse.
If you want to commit to going the gym regularly, for example, you can set up a weekly attendance goal and create a contract; whenever you fail to report in, Stickk will send some of your money to an anti-charity of your choice (options include the NRA, the Pro-Choice Foundation, and the Manchester United Fan Club).

So, what should you do? At the risk of sounding obnoxious, you should really try to get enough sleep in the first place: chronic sleep deprivation is one of the worst things you can do to your body, as it impairs your cognitive functions, your memory and your learning abilities. And you should never snooze anyway, not even when you’d sell your soul for five more minutes. How? By understanding that under no circumstances, and in absolutely no way, is snoozing going to make your day any better. Yes, you’ll get that brief, blissful feeling of being wrapped in the sheets again, but you’ll pay the price. We’re not good at resisting temptation, even when we know that doing so will pay off, but it’s never too late to learn. People who can delay gratification do better in life.
You might just start by learning not to snooze.

The Death of Skeuomorphism

This is the Calendar app you find on iPads and recent Macs. 

It comes complete with fake leather and torn bits of paper, resembling the real object it’s supposed to replace. This is an example of skeuomorphism, an approach to design that reproduces, purely as ornament, elements that were functional in the original object. It’s used in physical objects as well: your car might have fake, retro-looking hub caps on its rims, and the rivets on your jeans are most likely just fakes covering the real, functional rivets underneath. But Apple has made it famous by incorporating it into its graphical user interfaces.

Apple’s obsession with skeuomorphism reaches into the tiniest of details. If you have an iPhone with iOS 6, launch the Music app and take a look at the volume knob:

If you tilt the phone on its axis, left to right, you will see the reflection on the knob change, as if it were a physical one; the gyroscope inside the phone is used to detect the motion. It’s nearly impossible to spot, yet someone at Apple went to great lengths to program this into the interface. Steve Jobs was a fan of skeuomorphism, and Scott Forstall, the head of iOS software, was a strong supporter. But there’s been an ongoing debate about this inside the company for some time.

Yesterday, Apple fired Scott Forstall. He’s taking the blame for the iOS 6 Maps fiasco, but the implications for graphic design are interesting. Guess who’s been appointed to replace him on the non-business end of his responsibilities? Jony Ive, Apple’s chief of industrial design. Now he’ll be in charge of designing not just the products, but also the graphical elements of the software that runs on them. Ive’s design philosophy is one of purity and simplicity: «We try to develop products that seem somehow inevitable. That leave you with the sense that that’s the only possible solution that makes sense», he says.

Whether you like the design of Apple gadgets or not, you must agree that it’s one of the key factors in the company’s dominance. And the credit is all Ive’s. Before he came around, phones looked radically different. Computers were unappealing beige boxes. His first iconic creation was the original iMac, which came in a variety of bright colors and had a curious handle on top. Walter Isaacson notes in his book that it was “more playful and semiotic than it was functional”, and quotes Jony Ive on its purpose: «Back then, people weren’t comfortable with technology. If you’re scared of something, then you won’t touch it. I could see my mum being scared to touch it. So I thought, if there’s this handle on it, it makes a relationship possible. It’s approachable. It’s intuitive. It gives you permission to touch.
It gives a sense of its deference to you».

The design principles established by Apple now dominate technology, to the point that most other players in the field are very happy to just be copycats. This is most apparent on hardware, but software isn’t immune. Here’s the telephone icon from iOS:

It’s nearly identical, with very minor variations, on every other smartphone operating system, including Android. Somehow, Apple decided that the telephone function must be identified with a white telephone handset on a green background, and everyone else has just followed suit. Interestingly, it’s skeuomorphic, and it refers to an outdated design for telephone handsets. But while you’d be hard pressed to come up with a sensible alternative, does iBooks really need to resemble a wooden bookshelf?

Doesn’t this sacrifice functionality in some way? And why are most icons for voice recording shaped like either a classic studio microphone or an old tape, items that most people, especially youngsters, might have never seen in their lives? Skeuomorphism made a lot of sense when computers first came around: it gave people a quick way to grasp the functionality of otherwise obscure buttons or applications.
But do we still need that?

Given Ive’s track record in influencing the design of everyday things, it’s reasonable to imagine that he might apply the same mojo to UI elements. In other words, we may be on the brink of the greatest revolution in interface design since the inception of computers. Microsoft has just launched a new version of Windows that radically does away with the past and contains little or no trace of skeuomorphism. If Apple does the same they might become copycats themselves, but the impact of such a decision could be even greater.

The Plastic Brain of Taxi Drivers

What’s special about London taxi drivers? They have enlarged brains.

They’re not mutants. If you want to become a cabbie in London, you have to pass a daunting test called The Knowledge. To do so, you must be able to plot the shortest route between any two of the city’s 25,000 streets, and point out any relevant landmark along the way (there are about 20,000 in all). Preparing for this mind-boggling endeavor takes three to four years, spent mostly riding around on a scooter with a map on the handlebars – keep that in mind if you spot one in the city; it makes for a good story.

Fewer than 35 percent of applicants are granted a license, not surprisingly. What is surprising is that London cabbies are responsible for disproving one of the longest-standing tenets of neuroscience: that the brain, unlike other organs, stops growing shortly after childhood and is incapable of spawning new neurons. A study of 79 trainee cabbies had them undergo an MRI scan after three years spent learning London’s topography. Their posterior hippocampus, an area of the brain involved in spatial navigation, was found to have acquired additional grey matter: it was, on average, 7 percent larger than before the training.

This phenomenon, the brain’s ability to rearrange itself and change its physical structure, is called neuroplasticity. It’s one of many reminders of how little we know about the human brain. Recent research from Sweden shows that this growth is linked to specific types of activity. This time, MRI scanners were used on interpreters learning a new language from scratch, with cognitive science students as a control group: while the brains of the controls remained unchanged, over just three months the interpreters showed growth in the hippocampus (again) and in areas of the cerebral cortex that are, quite understandably, involved with language. This confirms previous research showing that bilingual children have superior brain functionality in some areas, and that being bilingual can delay the onset of degenerative brain diseases like Alzheimer’s.

It seems you can teach an old dog new tricks. But when it comes to the brain, not everything has a tangible effect. What about those Brain Training games, then? Nintendo and other companies maintain that by playing them regularly you can “keep your brain young”, citing dubious research. It’s a good marketing effort, and by no means the worst kind of manipulation of science, but sadly it is not true. Christopher Chabris and Daniel Simons dismiss the issue in their brilliant book, The Invisible Gorilla: «If you think that doing Sudoku will keep your mind sharp and help you avoid misplacing your keys or forgetting to take your medicine, you’re likely succumbing to the illusion of potential. Unfortunately, people who do more crosswords decline mentally at the same rate as those who do fewer crosswords. Practice improves specific skills, not general abilities».

In other words, all you get from playing Brain Training is that you get better at Brain Training. But there is a very easy way to improve your mental abilities, and it has nothing to do with puzzles. It’s called exercise. Engaging in physical activity increases the production of a protein that keeps nerve cells healthy, giving you better mental skills. This has been shown by several studies on humans and rats, as you can read in this New York Times article, opened by a very odd illustration.

For both rodents and men, walking or running for just a few hours a week improves cognitive functions and, of course, physical fitness. And people who exercise actually have larger brains in later life. On the other hand, data reveals that sitting for more than three hours a day can shorten your life span by as much as two years. So, say goodbye to “Dr Kawashima” and get out of that chair. Your brain will be grateful.