100 Million Years of Food

  Moreover, in countries where coconuts were a staple in the traditional diet, chronic diseases only became prominent after Western foods and lifestyles were introduced (and coconuts phased out). For example, among the Tokelau Islanders in the South Pacific, the historical diet consisted primarily of coconuts, fish, and breadfruit. It was a high-fat diet: Over half of the calories came from fat, mainly saturated fat (roughly one-third of coconut flesh is saturated fat).24 As the population of these atolls increased, the New Zealand government offered to resettle the Tokelau Islanders in New Zealand. About half of the Tokelau Islanders took up the offer and left for the mainland. Their new diet now included sugar, flour, bread, potatoes, meat, chicken, and dairy products. The result in the migrating population was an increase in obesity, type 2 diabetes, heart disease, gout, and osteoarthritis, even though fat intake actually declined after the move to New Zealand. On the other hand, daily sugar intake increased, along with carbohydrate and alcohol consumption. Among those who stayed on the Tokelau Islands, new European foodstuffs were also added to the diet, and rates of chronic diseases also increased, but not to the same degree as among the migrants.25 Like olive oil for those living in the Mediterranean, coconuts make sense as part of a South Pacific or South Asian cuisine; the high fat content of coconuts complements lean fish and a largely vegetarian diet. Removing the anchoring effect of coconut invites dietary abuse in the form of novel fatty or oily substitutes such as fried foods, which are a known risk factor for diabetes and inflammatory disease. As will be discussed later, fried foods contain trans fats and AGEs (advanced glycation end products) and have a high glycemic index, which are novel and harmful characteristics in the human diet; coconuts contain saturated fat, a substance that our ancestors had moderate but steady exposure to over millions of years, mainly in the form of animal fats.

  Bajish and I travel with a medical convoy into the hills of Kerala. The tribal people we chat with often use coconut in their diet, yet obesity, heart disease, and type 2 diabetes are not medical issues among them. We also note their vigorous lifestyles: they work the land with hoes and their hands and walk long distances to get around, in contrast to the majority of Keralans, who ride motorcycles. Kerala has among the best roads and highest income levels in India, but also the highest rates of type 2 diabetes. Petrol is heavily subsidized by the government, making it even easier to ride rather than walk. The risk of diabetes is strongly linked to this decline in physical activity rather than to coconut in the diet.

  *

  Another key fruit of contemporary Keralan cuisine, chili, has also been viewed with suspicion by Western-trained nutritionists. The spiciness of chilies comes from the peppers’ store of capsaicin, a chemical compound employed with excruciating effectiveness in pepper sprays (some spider venoms work through the same pain channel).26 Chili plants are admirably protected against predators, which might seem like a straightforward chapter out of plant evolution, but the saga of chili is wrapped in enigmas.27 For starters, chili plants retain their pain-inducing capsaicin protection even after the fruits mature; most plants, by contrast, reduce toxins and make their fruits tasty at that stage, to invite animals to eat the fruits and spread the seeds widely. Also, not only have we humans come to enjoy chilies, but many people seek out the wickedest varieties (XXX!, the hot sauce bottle labels trumpet, as if parading the temptations of adult entertainment). Why do humans, the only mammals known to do so, enjoy the pain inflicted by chili’s protective capsaicin?

  The most popular explanation of why we enjoy chilies is that their capsaicin compounds kill off fungal infection and other microbial invaders, and thus we come to enjoy chili dishes because we don’t get sick from eating them. If this explanation is true, it would put chilies in the company of a long list of spices that humans use not only to perk up dull dishes but also to keep meats and sauces from spoiling (and people from throwing up and running to the toilet). When Paul Sherman, a biologist at Cornell University, and his then-student Jennifer Billing looked at spice usage from recipes around the world, they found that hotter countries used more spices. This makes sense, since increased temperature boosts bacterial growth and encourages food spoilage, thus making the need for spices more urgent. In particular, three powerful spices that inhibit many varieties of bacteria are more frequently called for in the dishes of warm regions. The knock-’em-dead spices? Most likely they are familiar to you and are tucked away in your kitchen cupboards right now: garlic, onion, and of course, our favorite sadomasochistic temptress, chili.28

  However, there are some gaps in this explanation. The bacteria-busting hypothesis doesn’t explain why chili seasoning is becoming popular in countries where food safety standards are high and food poisoning incidents are low, or why some countries that are geographically close to each other, such as Japan and Korea, have different levels of spiciness (Japanese food is considered relatively bland, whereas Koreans use chili in almost all of their dishes). If bacterial warfare were the only basis for eating spicy foods, then humans would get addicted to irradiated or canned foods, which seems not to be the case. The hypothesis also fails to explain why people steadily become addicted to eating chili, requiring ever greater amounts to feel satisfied. In fact, the more one looks at the behavioral pattern of chili consumption, the more it resembles thrill-seeking or recreational drug use.

  Paul Rozin, a psychologist at the University of Pennsylvania, has suggested that humans are hardwired thrill-seekers, and we therefore enjoy blistering our tongues in the same way we (or some of us at least) savor a stomach-churning roller coaster ride and other forms of voluntary terror.29 While equating roller coasters with chilies (and perhaps by extension garlic and onions and other spices) seems a little strange at first, back in the 1970s and 1980s, an American psychologist, Richard L. Solomon, pointed out that positive and negative emotions tend to come in pairs. Survivors of lightning strikes first experience terror, then elation. A similar thing happens to parachutists, who experience terror as they plunge through free fall; after landing, they warm up to a feeling of elation. People who take sauna baths go through an analogous sequence of discomfort followed by relief. The reverse is also true: When Solomon allowed babies to suck on a plastic nipple, they cried when the nipple was taken away. Solomon gave the article announcing his theory the clunky title of “The Opponent-Process Theory of Acquired Motivation,” but fortunately he found a memorable subtitle: “The Costs of Pleasure and the Benefits of Pain”; that is, positive experiences are invariably followed by a drop in mood, while painful experiences are followed by relief.30 Solomon argued that over time, the pain and the relief paired with it both diminish, so a person is compelled to repeat the experience with gathering intensity, resulting in addiction to mildly painful experiences.

  Although psychologists today view the Solomon hypothesis as too simplistic to describe drug addiction behavior, it may help to explain the pleasure-pain paradox of spices. Ingesting chili is initially an aversive experience, but in small doses the pain fades and a pleasurable state arises afterward. Other spices, perhaps many, share this characteristic of being initially distasteful but pleasing afterward. Not all aversive foods follow this pattern, though; getting sick from food poisoning, for example, produces a prolonged period of nausea that no one wants to reexperience.31

  We have addressed only one part of our original dilemma over spices. The second question remains: Why do humans alone come to enjoy mildly aversive experiences like chilies (and parachuting)? One possible answer is that humans are masters of gratification delay and brain rewiring. With practice, the discomfort of jumping out of an airplane, climbing onto a stage before crowds of thousands, or chomping on chilies gradually eases; however, so do the hits of pleasure, hence the ever-increasing need for more punishment and more pain.

  In other words, even though spicy food has antibacterial properties, we may eat it not to avoid getting sick, or even because it tastes good at first, but primarily because it induces a paradoxical hit of pleasure after the displeasure; the benefits of pain, Professor Solomon might have observed. One consequence of his theory is that it explains why tropical cuisines tend to be spicy: The scarcity of meat, and especially of fat, in these cuisines leads cooks to drop in dollops of spices, to supply the feeling of pleasure that fat and meat would otherwise induce. When I lived in Korea, cooks who saw me about to ladle a spoonful of rice and vegetables without adding red chili paste cried out in horror, seized the nearest bottle of chili paste, and tried to squeeze it over my bowl, assuming that my meal would otherwise taste bland. But I had not yet been desensitized to chili, so in my view the pain did not merit the pleasure. Solomon’s theory also helps explain why Japanese and Korean cuisines differ so much in their spiciness. As an isolated, fertile island surrounded by rich coastal waters, Japan historically had access to far more animal flesh than peninsular Korea, and Japanese food therefore calls for relatively small amounts of wasabi (Japanese horseradish) compared to the full-force application of chili in Korean dishes. The same situation could apply to England, with its relatively spice-light and meat-heavy fare, and France, with its more flavorful but less meaty cuisine. The fact that spices inhibit bacteria would certainly have been helpful in promoting their adoption, but this may be an additional rather than the sole reason they’re so widely used.

  It seems logical that spicier, more flavorful food would make us fatter. However, chili may make people lose weight by increasing metabolism, body temperature, and the burning of fat.32 These weight-sloughing effects are modest unless chili is eaten in large doses, though, which limits its usefulness for populations unused to chilies, such as those in the United States, Canada, and Europe. By contrast, in one Mexican study, the average person ate the capsaicin equivalent of seventeen jalapeño peppers a day. Unfortunately, there is some evidence that eating copious quantities of chili could increase the risk of stomach, liver, bladder, and pancreatic cancer. Scientists at Kyoto University have developed a new variety of chili, CH-19 Sweet, that could offer the health benefits of capsaicin without the pain.33

  *

  Between 40 million and 16 million years ago, something curious happened to our ancestors: Their uric acid levels started to rise, because they progressively lost the genes for manufacturing uricase, the enzyme that helps dispose of uric acid. Uric acid, a by-product of a diet rich in purines (organic compounds found in seafood and beer) and fructose (the sugar in fruit), can be a very inconvenient, nasty substance. It’s responsible for causing gout, a debilitating condition in which uric acid crystals build up in a sufferer’s joints. As a result of losing the ability to manufacture uricase, humans have uric acid levels three to ten times higher than those of other mammals and, unfortunately, a greater predisposition to gout and possibly hypertension. The loss of uricase over millions of years is one of the greatest unsolved mysteries in the evolution of the human diet. Because high uric acid levels are dangerous to health, it’s extremely puzzling that our ancestors progressively shed the ability to deal with uric acid. Like losing a kidney or a lung, it may not be fatal, but it’s a considerable inconvenience. Why did our evolution take us down such a hazardous path? Around 70 percent of our uric acid is reabsorbed by our kidneys rather than excreted, evidence that uric acid must play some positive role in the human body rather than simply being a nuisance by-product of purine metabolism, as scientists had formerly believed.

  Many hypotheses regarding the function of uric acid have been proposed. One suggestion is that uric acid helped our primate ancestors store fat, particularly after eating fruit. It’s true that consumption of fructose induces production of uric acid, and uric acid accentuates the fat-accumulating effects of fructose. Our ancestors, when they stumbled on fruiting trees, could gorge until their fat stores were pleasantly plump and then survive for a few weeks until the next bounty of fruit was available. The problem with this theory is that it does not explain why only primates have this peculiar trait of triggering fat storage via uric acid. After all, bears, squirrels, and other mammals store fat without using uric acid as a trigger.

  Some researchers argue that the elevated levels of uric acid that accompany gout could have been a survival advantage in ancestral environments that were arid and where food was scarce, because high uric acid levels are associated with increased blood pressure (which drops dangerously when salt is scarce) and a greater tendency to deposit fat. Uric acid could have helped to maintain adequate blood pressure on a low-sodium fruit diet and during an interval when Earth’s climate was drying out and hence salt loss through sweat could have been a problem.34 However, mammals that thrive in arid environments, like camels and desert mice, seem to do fine without elevated uric acid levels.35 Other mammals also subsist on fruits, yet primates are the only animals known to have lost uricase. According to yet another hypothesis, primates are pretty smart creatures, and most of them lack uricase; therefore, the reasoning goes, uric acid must be responsible for their increased intelligence. While it’s true that higher levels of uric acid have been found to protect against brain damage from Alzheimer’s, Parkinson’s, and multiple sclerosis, high uric acid unfortunately increases the risk of stroke and poor brain function.

  Those trying to solve the mystery of this trait in human history tend to recast symptoms of high uric acid as beneficial in our past. This is a common tendency in evolutionary theorizing: people look for an evolutionary reason behind facts that may actually be by-products of evolution. The cognitive scientist Gary Marcus labeled such evolutionary by-products “kluges”; some aspects of our bodies, like bad backs, arose because something else had evolved (walking upright, in the case of bad backs), and we humans got stuck with the accidents of history.36

  A more realistic proposal for the evolution of uric acid has this character of kluginess. After several million years of not producing vitamin C in fruit-rich rainforest environments, our primate ancestors had no way of re-evolving this ability, because too many mutations had accumulated in the original vitamin C–synthesizing genes over the long period of disuse; like a car engine too long unused, vitamin C synthesis could no longer fire up. As it happens, uric acid has chemical properties that permit it to function as an antioxidant.37 Therefore, uric acid, a by-product of eating fruit and insects, may have been adopted as a second-best defense against oxidants. Indeed, higher levels of vitamin C result in lower levels of uric acid and diminished gout, possible evidence that vitamin C and uric acid are partial substitutes for each other.38

  Like any evolutionary adaptation, there were drawbacks to uric acid’s newfound role as an antioxidant. Exposure to high uric acid levels from overabundant fructose and purine consumption over several years results in insulin resistance, hypertension, and obesity-related disorders. In the ancestral environment, encountering significant quantities of either fructose or purine would have been rare. Today, fructose is plentiful, in the form of soft drinks and sweet, overdomesticated fruits like apples and oranges; purines are also common, found in seafood, meat, lentils, and other foods. A recent study also observed that high uric acid levels are associated with greater excitement-seeking and impulsivity, which the researchers noted may be linked to attention deficit hyperactivity disorder (ADHD).39

  Blocking the production of uric acid through drugs like allopurinol alleviates hypertension, at least in adolescents who are not too far down the path of uric acid–mediated damage. However, drugs that reduce uric acid may cause serious side effects, such as immune system reactions resulting in fever, rash, impaired kidney functioning, liver damage, and elevated white blood cell counts.40

  At this point, if you were to hand the script over to an imaginative sci-fi writer, he or she might suggest injecting people who suffer from high uric acid levels with uricases from nonprimate animals, or recreating our ancestral uricase on a computer, synthesizing it in a lab, and injecting it into patients.

  Truth is stranger than fiction: Researchers recently combined pig uricase, which is highly effective at breaking down uric acid, with baboon uricase, to lower the risk of immune rejection in human recipients. Although this pig-baboon chimera uricase was effective in reducing uric acid levels, it broke down very quickly in animal tissues and required chemical modification to become stable. Unfortunately, this modification also made the chimera uricase more likely to be rejected by human immune systems. Researchers then used computer programs to reconstruct a uricase that our lineage last possessed 92 million years ago. The ancient uricase was synthesized in a laboratory using handy E. coli bacteria as surrogate mothers for the synthetic enzymes. When injected into healthy rats, the ancestral uricase was found to be a hundred times more stable than the chimera uricase, making it a promising candidate for drug development.41

  To put everything into perspective, fruits, like insects, were once an integral part of our evolutionary history and remained a valuable part of traditional diets. Even though meat provides virtually all of the nutrition necessary for survival, at certain times fruit could be crucial to human health, especially when fresh meat and its accompanying vitamin C were unavailable. For example, the Inuit living in Alaska, northern Canada, and Greenland made use of a broad variety of animal foods—seal, whale, walrus, caribou, polar bear, fox, wolf, Arctic hare, waterfowl, fish, mussel, sea urchin, and so on—but they also harvested a staggering variety of berries. These berries were critical; Inuit who lacked fresh seal meat could develop pustules when the berry crops ominously failed, as occurred in 1904–5 among the Greenland Inuit.42

 
