Tags: farming


Why do we farm?

(the conclusion of the farming series, http://shkrobius.livejournal.com/tag/farming)

Social Darwinism may explain the critical advantage of farming over hunting/gathering (by means of increasing one's fitness through the ability to shorten inter-birth intervals), but, of course, people did not begin farming for this arcane reason. This line of thought shows why farming is preordained by human nature; it does not explain in what sense it was ordained to appear only when and where it appeared.

Most theories of the origin of agriculture are absurdly materialistic: some overriding, visible gain is postulated, as if this gain (which is obvious only in retrospect) were transparent to the early farmers. That is because the perspective is twisted. Modern-age urbanites do not make the choice (imposed on farmers during periodic crop failures) between the lives of their own children and keeping the seed for the next season's planting. Only people who made such choices know what subsistence farming really was. Evolutionary ideas (like Blurton Jones' feedback or Rindos' coevolution hypothesis) do not aspire to explain how anyone could make the choice in favor of plants over one's own dying children. The unbreakable bond to the land comes first; farming comes second. Farming is simply the way of staying on YOUR land. There is something one cannot walk away from, no matter the cost - except under unimaginable duress.

We still have this bond to our ancestral lands, we still have this notion that these lands may be worth dying for regardless of the possible material gain to ourselves and our children. This bond is getting weaker, but it is still there. This is not a matter of economics, it is a matter of belief. Farming is the consequence of developing beliefs that completely change the relation between a man and the land. It is the material end of religion. The Neolithic Revolution was less a revolution in the material culture than the spiritual domain. The first farmers were the elect rather than selected.

This is the view of evolutionary biologist and ethnobotanist Charles Heiser, who came to it after almost 50 years of disappointment with our mumbo-jumbo "scientific" theories (like the dump-heap hypothesis postulating that agriculture started near dung and garbage heaps). Similar ideas were developed by Jacques Cauvin, the archaeologist who for many decades excavated preagricultural Late Natufian sites in Syria. They both started with the purely materialistic view of plant domestication. They both rejected this view in the end. It is pointless to look for ecological, biological, and material causes alone. These might explain how and where. They do not begin to explain why.


Farming is a symbolic union of a man and the land that he fertilizes, first with the seed of the fruits of the earth and then with his own dead body. This union is the meaning of the farmer's life and the meaning of his children's lives. This is why a farmer watches his children die of hunger and then sows the saved seed into the soil; because such is the order of things. Before you farm, you first need to put things in that order.

Why do we farm?

PS: CB Heiser, "On plants and people" Ch. XIII
J Cauvin "The birth of the Gods and the origin of agriculture"

Social Darwinism & Baby Food

"Social Darwinism" is a term of abuse used (by those who, ironically, claim to be strong adherents of Darwin's theory of evolution) against those who try to apply Darwinian ideas to the understanding of society. That is said to be grossly inappropriate. Perhaps Darwinian ideas do not apply to society; there is no strong evidence that the Darwinian paradigm fully explains the historical development of humanity. Equally, there is no strong evidence that this paradigm fully explains natural history. There is no logic in adhering to one view and rejecting the other, but people are seldom logical about their beliefs. Darwinism is not adopted by the masses for its scientific merit. One needs only to ask a few questions to realize that the vast majority of folk "Darwinists" do not know what Darwinism is about. It is adopted primarily for its promise of a g-dless Universe, which has nothing to do with either scientific reasoning or logic. Ditto for social Darwinism. On close questioning, the folks foaming at the mouth at the very mention of social Darwinism fail to formulate what precisely it is. It is impossible to pull out of them any coherent answer as to why Darwinian thinking cannot be applied to human society, or to extract from them any specific critique of particular socio-Darwinian explanations. Both the application and the critique are specialized fields of knowledge; the crusaders against social Darwinism have only their own conceited prejudice. I think that they should, at the very least, learn how effective Darwinian thought can be in rationalizing otherwise strange findings about human society and its history. But again, social Darwinism is not rejected because it is an idea wholly without merit. It is rejected because it annuls the promise of getting something for nothing and the dream of the endless free ride.

The mystery is compounded by the fact that Darwin's ideas are rooted in those of Malthus: natural selection needs attrition, part of which is due to the limited availability of natural resources. The usual point of criticism is that natural scarcity does not apply to ingenious humans set out on the path of unlimited progress and expansion. One has to be deaf and blind to make this argument in the 21st century. For example, the Green Revolution is likely to come to its end in less than 100 years due to the exhaustion of mineable phosphate fertilizer. No one (including Life itself) has found a way out of this phosphate scarcity problem. There are water problems, salinization problems, soil erosion problems, fertilizer effluent problems, pollution problems: endless problems. The resources can be exploited more efficiently, but they are still limited resources, and their more efficient use only serves to increase demand.

However, there is a greater problem. It is pointless to claim that if the Malthusian argument of natural scarcity does not apply to humans, that somehow invalidates social Darwinism, because the original argument does not apply to nature either. Darwinism alone does not explain "why the world is green" - that is, why most biocenoses are filled well below the levels suggested by the Malthusian scenario. Only a small fraction of the plant material is consumed. The expansion of rabbits is not checked by the availability of food; that is a false perspective. The bunnies starve amid abundance, because their population growth is checked by the selection of plants they can eat. That, in turn, is limited by the kind of digestive system a small hopping herbivorous mammal can have. That, in turn, is limited by the basic bacterial, plant, and animal biochemistry that was more-or-less settled 1 Gya. Where does Darwinism fit into this picture? Darwinism alone is not the answer to every question in biology; social Darwinism alone is not the answer to every question about human society.

What about human society? Is Malthus correct? Do we operate well below the production capacity or close to it? In the Malthusian scenario, it needs to be close: humans have been around for a long time, so the population should match the availability of resources. If you look at a traditional agricultural society, this is indeed the case. Malthus was not wrong.

This is the main problem of the agriculturalists: crop yields fluctuate greatly, so simple prudence suggests that the optimum population should correspond to the one supported at the lowest range of these fluctuations. But agricultural society has different dynamics: it is chronically overpopulated, and so there is great attrition when the crops fail. The only solution is finding more land, and here the inflated ranks of farmers do help, as the numbers make warfare possible. Then comes resettlement, and the story begins all over again. Naturally, it was expected that hunter-gatherer societies would be the same, only with lower productivity. Then, in the late seventies, it was discovered that this is not the case for many modern hunter-gatherers. Their societies operate significantly below the lowest limit of natural production, just as happens in the majority of biocenoses on land. Their population has stabilized at a level that affords elbow room for productivity fluctuations. This is not the consequence of high mortality as such, as it is offset by high fertility. The population CAN grow. But it does not. Malthus' ideas are based on his intuitive notion that the flesh is weak, that the population will grow until it is limited by resource availability. But this does not follow from anything but his own observations, and those were of an agricultural society. His error was in assuming that this peculiarity should be universal.

Eastern North America provides an example of "weed agriculture" (see the previous post) that never resulted in large population density or significant population growth. Cereal crop farmers, by contrast, were caught in a vicious circle: a large sedentary population required intensive agriculture that in turn demanded a lot of hands. All of that was for no real advantage: there is nothing advantageous (in any sense) in increasing numbers. This is something that many people fail to grasp: increasing the number of offspring provides no increase in fitness, because for that you need differential survival. Normally, it is provided by attrition. If everyone around is breeding at the same rate, the fraction of one's genes in the pool is not increasing. From the Darwinian standpoint, increasing population makes sense only under certain scenarios (like elimination of the neighbors using numerical majority).
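The point about differential survival is trivial arithmetic, and a sketch with made-up numbers shows it directly: scaling everyone's offspring count equally leaves each lineage's share of the gene pool untouched.

```python
def lineage_share(my_offspring, others_offspring):
    """Fraction of the next generation's gene pool coming from 'my' lineage."""
    return my_offspring / (my_offspring + others_offspring)

# Everyone breeds at the same rate: doubling all counts changes nothing.
base = lineage_share(2, 198)         # 2 of 200 children are mine: 1%
doubled = lineage_share(4, 396)      # everyone doubles output: still 1%
assert abs(base - doubled) < 1e-12

# Only *differential* reproduction or survival moves the frequency.
advantage = lineage_share(4, 198)    # only my lineage doubles: ~2%
print(base, doubled, advantage)
```

The numbers (2 children, a pool of 200) are arbitrary; any choice gives the same conclusion.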

The individual fitness advantage is indirect; evolution is supposed to be driven primarily by this individual advantage. Naturally, no one knows how such a below-the-limits situation can maintain itself. Initially, people nurtured dark fantasies about infanticide and genocidal warfare, but ethnographic data suggested that neither played a significant role. So far, the only answer has been... the very social Darwinism that is proclaimed to be an aberration of reason. The idea originally came from Blurton Jones' studies of the Bushmen and the !Kung san in Africa. It is Darwinian thinking at its best: the emergence of a complex stabilization mechanism from the pursuit of individual benefit.

...Lack of starvation among groups such as the !Kung san implies a stabilized population size despite a potential fertility rate of about 10 - 15 children for a woman over her reproductive period if there were no behavioral mechanisms negatively affecting her fertility. Even when this potential fertility rate is coupled with the high mortality rates experienced by hunting and gathering societies (upwards of 50% mortality before adulthood followed by relatively low mortality rates during the reproductive years), the consequence would still be a rapidly growing population capable of outstripping food resources within a few hundred years.
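The quoted arithmetic is easy to verify. A back-of-envelope sketch, with assumed round numbers (12 children per woman as the midpoint of the quoted 10-15, the quoted ~50% pre-adult mortality, and an assumed 25-year generation length):

```python
# Assumed round numbers for illustration; not data from the study.
fertility = 12          # children per woman (midpoint of the quoted 10-15)
survival = 0.5          # fraction surviving to adulthood (~50% mortality)
generation_years = 25   # assumed generation length

daughters = fertility * survival / 2       # 3 surviving daughters per woman
generations = 300 // generation_years      # 12 generations in 300 years
growth = daughters ** generations          # 3**12 = 531441

print(f"~{growth:,.0f}-fold growth in 300 years")
```

Even with half the children dying before adulthood, the population would multiply some half-million-fold in 300 years; hence the need for behavioral mechanisms limiting fertility.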

…Evidently groups such as the !Kung san are engaging in behaviors that have the effect of limiting population density since empirically it is the case that their population density remains below the point at which stochastic variation in resource availability would otherwise lead to starvation. The vast majority of hunter/gatherer populations are and were maintained well below carrying capacity. This raises the question: Are groups such as the !Kung san consciously engaging in population limitation and monitoring current population size against resource availability, or is the limitation a property arising from behaviors whose motivation lies not at the level of population regulation but at the level of self-interested decision making? Ethnographic data on the !Kung san argues against the former. If so, then it is of interest to know what minimal specification of culturally distinguished behaviors will have population limitation as an emergent property.

...!Kung san women space children approximately 4 years apart with extended nursing suggested as the primary mechanism for birth spacing. Some researchers have taken the fact of spacing – assumed to be independent of population density – as a datum to be embedded into an appropriate theory about behavior such as selection for behaviors that maximize Darwinian/inclusive fitness via an optimization strategy based on energy expenditure. The selection argument is based on the observation that the marginal increase in parenting effort (measured by transportation cost of carrying children while gathering) when she has another child is minimized with 4 year spacing. http://www.sscnet.ucla.edu/anthro/faculty/read/PDF_Files/Conferences/Read_EmergenceFinal.PDF

...Given the observed features of !Kung ecology, shorter IBIs (inter-birth intervals) require a mother to carry much greater loads (backload) of baby and food on her foraging trips. Calculating this load for each IBI shows a sharp upturn in backload when IBIs fall below 4 years. It was suggested that sharply increased backload might lead to such a severe increase in mortality of offspring that the observed 4-year IBI actually maximized the number of offspring that the woman raised. This proposition is tested in this article using reproductive histories of individual !Kung women collected by Howell. Backload calculated in the previous study is the best predictor of the observed losses of children at each IBI. Furthermore, mortality of the children is indeed so sharply related to IBI that the 4-year IBI appears to be the interval that maximizes reproductive success for most women. http://www.kli.ac.at/theorylab/AuthPage/B/Blurton_JonesNG.html

...Nicholas Blurton Jones formulated the birth-spacing problem using an optimization model, assuming the "goal" of maximizing the yield of surviving offspring (i.e., maximizing fitness). This is not meant to be a conscious goal (though it is worth noting that it is closer to the expressed preferences of the !Kung, who say they value having lots of children, than is homeostasis or load-minimization); rather, it is envisioned as the expected outcome of design by natural selection, regardless of what proximate mechanisms bring it about. Blurton Jones used Nancy Howell's demographic data on individual life histories (IBIs, child deaths) to conduct a direct test of this optimization model. The results demonstrate detailed confirmation of the model's predictions, the simplest of which to explain is the match between the calculated optimal IBI (48 months) and the modal IBI (the most empirically common interval). This match is the result of a correlation between IBI and child mortality: if births are more closely spaced, offspring mortality rates increase so much that fewer children actually survive, just as Lack's model predicts.

...Some additional hypotheses supported in Blurton Jones' study include: 1) child mortality is not correlated with IBI for cattle-post women (who have much lower mobility costs), and 2) the death of an unweaned infant leads to a shortened IBI, but the death of an older child does not (because there is no effect on lactation, and little effect on the carrying cost to the mother). These results are not consistent with the idea that long IBIs are a form of population regulation for the benefit of the !Kung as a whole (in fact, even with these long IBIs the !Kung population was growing slightly); instead, they better fit the idea that they are a form of individually adaptive reproductive restraint. Paradoxically, by spacing births widely, !Kung women are maximizing the number of children they rear; higher birth rates, by disproportionately increasing mortality, would actually lead to a decreased population.

...!Kung birth-spacing has multiple determinants; besides the proximate physiological and behavioral ones (carrying effort, lactational amenorrhea), there are also social and ecological ones: the gender division of labor (females are responsible for gathering and child care; there is no real cooperative child-tending or communal nursing) and kids' reliance on parents for food and supervision (possibly related to the ease of getting lost in the bush).
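The optimization logic in these excerpts can be sketched as a toy model. The survival curve below is invented purely for illustration (the real analysis used backload calculations and Howell's demographic data); it only encodes the reported qualitative fact that child survival drops steeply when the IBI falls below about 4 years, while births per reproductive span fall off as the IBI grows.

```python
def surviving_children(ibi_years, span=24.0):
    """Toy trade-off: births over the reproductive span vs. child survival.

    The survival curve is an assumption made for illustration: it rises
    steeply up to a 4-year IBI (mirroring the reported backload effect)
    and saturates beyond it.
    """
    births = span / ibi_years
    survival = 0.5 * min(1.0, (ibi_years / 4.0) ** 3)
    return births * survival

candidates = [2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 6.0]
best = max(candidates, key=surviving_children)
print(best, surviving_children(best))  # the 4-year IBI maximizes the yield
```

Under these assumptions, closer spacing loses more children to mortality than it gains in births, and wider spacing simply forfeits births - so the maximum sits at the 4-year interval, echoing Lack's clutch-size logic.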

Having more offspring does not increase fitness, because closely spaced children have a lower chance of surviving to reproductive age while their mothers care for their younger siblings. The problem is not so much parental care as such as the low availability of particular foodstuffs: soft, nutritious food for the young. The adult hunter-gatherers have plenty of food for themselves, but the young do not let them roam freely, and the prized soft foods suitable for children are generally less available than other foods. Population growth is limited through this negative feedback; the availability of certain foods whose supply is limited matters most. This is a Darwinian mechanism, as the advantage is strictly individual. Nothing stands in the way of dividing labor (having individuals without nursing children collect soft foods for the children of others), but they would not do it, because this does not increase their fitness. Such division of labor makes no sense except in cases of very close interrelatedness.

From this perspective it becomes clear what might have been the fundamental problem with the advent of cereal agriculture and subsistence farming. By itself it did not lead to an immediate population explosion; that came only when it was combined with another fateful invention, pottery. After that, the production of soft foods (gruels, porridges) year-round became possible, and that caused the Neolithic revolution in baby feeding. The game was redefined. The important natural constraint keeping the population in check had been removed. Fertility and mortality rates were still high, but for the first time births could be closely spaced, which gave the human breeding machines a temporary fitness advantage. The gains were short-lived, but they started the runaway cycle of intensive subsistence farming and dense settlements, after which the collective advantage finally emerged: the swelled ranks of the farmers swept the land clean of the hunter-gatherers...

This is one of the clearest explanations of the origin of agriculture, and it makes a lot of sense to me. It is pure social Darwinism, going beyond the Malthusian logic by underpinning that logic as applicable to a particular society and pointing out how its specificity emerged. These ideas have been discussed for many years, and there are many follow-up studies, some supporting this scenario and some contradicting it. It is normal science based on observations and generalizations. Nobody points fingers and shouts "social Darwinism." It is understood that arguments have to be substantive and constructive. But after 30 years of scrutiny, no one has suggested a better explanation. Social Darwinism (it is now called socio-behavioral Darwinism) is in a similar situation to Darwinism in general: it is a powerful way of thinking that has proved its worth, but it is not without problems. It explodes decades of "progressive" thinking about the early stages of human history - generated in advance of any serious studies - starting from Condorcet.

Survival of the fittest?

PS: First Farmers: The Origins of Agricultural Societies, by Peter Bellwood


The American Way

Then, churls, their thoughts (although their eyes were kind)
To thy fair flower add the rank smell of weeds:
But why thy odour matcheth not thy show,
The soil is this, that thou dost common grow.

Sonnet 69

Agriculture originated independently in several places; chronologically, the last of these was eastern North America (ENA). The type of agriculture practiced on the plains was so unusual that only in the 1980s was it realized WHAT the Indians had been farming since around 4500 BP. This agriculture was in full swing between 250 BC and 200 AD, with large farming communities (the Hopewell tradition, http://www.cabrillo.edu/~crsmith/hopewell.html). Around 800 AD this tradition was gone. An imported crop (maize) from the south completely dominated farming from that time onwards.

Farming is growing plants, right? If the rainfall is plentiful, the plants grow quite by themselves -- so why do crops need to be tended? Specifically, why not just sow the seed, abandon the field, go hunting buffalo and collecting nuts for the summer (these were the food staples of the Indians), and then return to harvest? In this way you can have a lot of crop fields and thereby hedge your bets against famine, escaping the common ordeal associated with committed subsistence farming.

As everyone knows, this approach does not work. The crops are a free lunch for two-, four-, and six-legged critters. Furthermore, in no time the field will be choked with weeds. You cannot walk away from your field and expect a plentiful harvest in the fall. Or can you? If you cultivate cereal grasses, you certainly can't. If you cultivate certain tubers, it may work, though not too well. For example, cassava has cyanogenic glucosides, so its tubers need soaking in water to avoid HCN poisoning; for the same reason, the tubers are avoided by animals. For that reason (plus its nutritional value as a starchy food) it has been chosen by farmers who are not particularly keen on exerting themselves. This is a telling example. A lot of pre-Columbian American crops are also loaded with toxins: tomato, potato, hot peppers. If you push this approach to the extreme, you will realize that the best crops to cultivate (if you can get away with it) are weeds, the very things that are combated in normal farming. Weeds do not need to be tended. So the dilemma of farming without toil boils down to the following question: is it possible to subsist on weeds? The answer is yes, it is.

The ENA Indians were unique in developing just such an agriculture of weeds. Even the names of the plants they cultivated are telling: sumpweed, pigweed, mintweed, goosefoot, maygrass, knotweed, little barley. You would not want these plants anywhere near your lawn, let alone your crops. Sumpweed (marsh elder), for example, is a relative of ragweed, and its pollen is just as allergenic. The plant itself irritates the skin; only its oily seeds are edible. Maygrass has tiny greenish seeds loaded with bitter tannins. The rest of them look like... regular weeds. That is because these plants ARE weeds. The only exceptions are the sunflower and the pepo squash. Well, the sunflower is not considered a weed only because we cultivate it. But, of course, it is a weed. What makes a weed a weed is the ability to kill or slow down other plants: this is called allelopathy. Among cereal grasses, only a few cultivars of rice have medium-to-strong allelopathy; barley, oats, and sorghum have weak allelopathic properties; other crops have none. By contrast, the sunflower and the rest of the ENA weeds are veritable allelochemical factories.

The general approach is to release secondary metabolites into the soil; these interfere with the growth of other plants. The ways of doing this can be very ingenious. For example, a relative of cabbage, garlic mustard, produces a chemical that interferes with the signalling between roots and the mycorrhizal fungi that deliver minerals to those roots. More common is the release of quinoid molecules that disrupt the functioning of Photosystem II in higher plants by substituting for plastoquinone and breaking the electron transport. Other favorite approaches are interfering with ATP synthesis and night respiration. Over millions of years, the weeds have perfected their phytotoxins to deal with a wide variety of plants.

The main weedy crop of the ENA Indians was goosefoot, a chenopod. Its South American relative is quinoa, which is still farmed extensively (it is sold in organic food stores). A while ago it was second only to the potato as a staple crop there, besting maize by a healthy margin. Not only is it a weed, it has two other desirable properties. First, its seeds are inedible as harvested, since they are coated with a thick layer of saponins. The saponins are very bitter and mildly toxic, and they also have strong anti-fungal properties. Animals, in particular bugs and birds, do not eat such seeds. Yet the saponins can be removed by prolonged soaking of the seeds in water, periodically replacing the water. If you have a lot of water around, there you go: the weed chokes other plants; no animal, regardless of the number of legs, is keen on eating the seeds; and once the seeds are harvested, storing them is unproblematic, as they do not rot and animals want none of them. All you have to do is remove the saponins when you need the seeds and then consume them promptly. As it is a weed rather than a cereal, it is gluten-free, and it has more protein than wheat. It also has lysine; it is a much more balanced food. It looks a bit like lentils. The Wiki relates the following touching story about quinoa:

...The bitterness has beneficial effects during cultivation, as the plant is unpopular with birds and thus requires minimal protection. There have been attempts to lower the saponin content of quinoa through selective breeding to produce sweeter, more palatable varieties. However, when new varieties were developed, native growers rejected the new varieties despite their high projected yields; because the seeds no longer had a bitter coating, birds had consumed the entire crop after just one season.

They love it because you can plant it and forget about it. But that's not all. This weed has another layer of protection. The weak point of many other weeds is that their leaves are vulnerable to insect larvae. The goosefoot loads its leaves with oxalic acid to such a degree that almost no insect can eat them. The goosefoot NEVER fails. It is resistant to drought, heavy rainfall, scorching sun - well, it is a weed, after all. You can plant this weed, walk away from your field, and come back in the fall. There will be no other weeds there, and the animals will avoid the field by a mile. It was the perfect solution, except for two minor problems.

Problem one is that you need a lot of water to process the seeds. Around 600 AD there was considerable climate change; water became less abundant. Problem two is that despite millennia of cultivation the size of the domesticated seeds in ENA did not increase, unlike in every other agricultural center. It is this oddity that delayed the discovery of this queer agriculture: the domesticated and wild varieties look almost identical in size and nutritional value. The Indians did not care to increase the productivity of their fields of weeds, because there was no need for more productive crops. The population density never increased much. There was no runaway growth in which more people are needed to tend the crops and more crops are needed to sustain these people. And it worked like that for 3500 years. It is still not clear why it suddenly stopped around 800 AD. The whole Hopewell tradition collapsed continent-wide, for reasons unknown. The cultures that replaced it were influenced from the south. The weed agriculture disappeared.

So when people say that agriculture means sedentism, high population density, famine, progress, civilization, and what not - this is simply not so. None of these things are a given. Chenopod plants grow across the whole world, Australia included. Weed agriculture could have been practiced everywhere, but that did not happen. In the end, even the noble Indians succumbed to the life of toil and privation, becoming the slaves of their plants.

Why aren't we farming weeds?

PS: Most of what is known about the weed agriculture of the ENA Indians comes from the studies of BD Smith:
Chenopodium as a prehistoric domesticate in eastern North America: evidence from Russell Cave, Alabama.
Origins of agriculture in eastern North America.
Eastern North America as an independent centre of plant domestication.
See also Charles Heiser's classic "On plants and people," Chapter 4 (this is one of my favorite books - amazingly clever and funny - written by a terrific scientist).


Over the rainbow

cont. from http://shkrobius.livejournal.com/242806.html

The transition to agriculture/pastoralism was a plunge into the unknown. There is no evidence that pre-agricultural people had ever experienced famine or periodic starvation. Diamond wrote an excellent essay on this very topic, http://www.ditext.com/diamond/mistake.html
and the subsequent research has only strengthened the case. Subsistence farming was a disaster.

...in most parts of the world, whenever cereal-based diets were first adopted as a staple food replacing the primarily animal-based diets of hunter-gatherers, there was a characteristic reduction in stature, an increase in infant mortality, a reduction in lifespan, an increased incidence of infectious diseases, an increase in iron deficiency anemia, an increased incidence of osteomalacia, porotic hyperostosis and other bone mineral disorders and an increase in the number of dental caries and enamel defects. In a review of 51 references examining human populations from around the earth and from differing chronologies, as they made the transition from hunter-gatherers to farmers, there was an overall decline in both the quality and quantity of life. http://www.direct-ms.org/pdf/EvolutionPaleolithic/Cereal%20Sword.pdf

Cereals as a staple food are extremely problematic: biologically, we are not equipped to subsist on this kind of food, as for 150 kyr cereals were only a small part of the diet. The textbook examples of this incompatibility are celiac disease, caused by exposure to gluten, and lactose intolerance. Less widely known are the effects of phytates (which cause imbalances of Ca, Mg, non-heme iron, and other microelements, resulting in a host of diseases) and the suspected connection to cystic fibrosis (the defective gene bestowed a degree of lactose tolerance, see http://www.nature.com/ejhg/journal/v15/n3/abs/5201749a.html).

Cereal grasses (in order to avoid being eaten by herbivores) produce antinutrients that interfere with the digestive systems of mammals. Committed herbivores can cope with these toxins and antinutrients after millions of years of co-evolution (e.g., they have gut bacteria that split the phytates). We cannot. The worst case is subsistence on whole-grain unleavened breads contributing more than 50-60% of the calories; this leads to

...rickets, retarded skeletal growth including hypogonadal dwarfism, and iron-deficiency anemia. The main lectin in wheat (wheat germ agglutinin) has catastrophic effects upon the gastrointestinal tract. Additionally, the alkylresorcinols of cereals influence prostanoid tone and induce a more inflammatory profile, as well as depressing growth. Simoons' classic work on the incidence of celiac disease shows that the distribution of the HLA B8 haplotype of the human major histocompatibility complex (MHC) follows the spread of farming from the Mideast to northern Europe. Because there is strong linkage disequilibrium between HLA B8 and the HLA genotypes that are associated with celiac disease, this indicates that those populations with the least evolutionary exposure to cereal grains (wheat primarily) have the highest incidence of celiac disease. http://www.beyondveg.com/cordain-l/grains-leg/grains-legumes-1a.shtml#intro

This is the kind of food on which our farming ancestors lived for centuries. Switching to agriculture was human breeding: selecting those tolerant of toxins, systemic malnutrition, periodic famine, and vitamin and micronutrient deficiencies; those capable of surviving a drastic increase in infectious disease and a decreased life span, the endless toil, all-out warfare, and the other attributes of subsistence farming and civilized life. We are the descendants of these early farmers, those few that walked over the rainbow. It is their voyage that shapes us.

Farming required a lot of quick fixes. Perhaps greater intelligence is one of these fixes. We think of it as an increased ability, but this is like calling "lactose intolerance" a deficiency. It is not a deficiency; it is the ancestral trait. For over 95% of human history the mutation that gave us lactose tolerance was not required. Higher intelligence and abstract thought were not required either. The Pygmies of the Congo, with an average IQ below 60, have all the intelligence that is needed to live in the jungle. No more was needed. The need had to emerge first.

I am not sure that it is our need. It could be the need of the cereal crops. On our own, we were content with a fraction of what we have today. The forbidden fruit in the Garden of Eden is said to be an apple: in Latin, "malus" is both an apple and evil. It is a play on the original Hebrew: "khitah" is wheat and "khet" is sin. The Gemara says that the forbidden fruit was wheat...

Who is gardening whom?


The point of having leaves is to collect light for biosynthesis, not to form huge apical buds (= cabbage heads) that do nothing. Some people have strange theories about cabbages, e.g. that the heads benefit the plant by retaining water in the manner of cacti. This is like saying that cystic fibrosis favors its victims by allowing them to retain mucus. Leaves are supposed to evaporate water, not "retain" it, when there is no drought.

So, why do cabbages look like cabbages? The standard story is selection by farmers: the cabbage head as a manifestation of human ingenuity and dedication. Therein lies a problem, however. To select for a cabbage head you need to know that you want a cabbage head. But how can you want something that has no analog in the floral world? The logical solution to this paradox is to assume that the selection was for other traits; the cabbage head just came along. So goes the story of the miracle-working farmers:

...In the wild, the Brassica oleracea plant is native to the Mediterranean region of Europe, and is somewhat similar in appearance to a leafy canola plant. Soon after the domestication of plants began, people in the Mediterranean region began growing this first ancient "cabbage" plant as a leafy vegetable. Because leaves were the part of the plant which were consumed, it was natural that those plants with the largest leaves would be selectively propagated. This resulted in larger and larger-leafed plants slowly being developed as the seed from the largest-leafed plants was favoured. By the 5th century B.C., continued preference for ever-larger leaves had led to the development of the vegetable we now know as kale. As time passed, some people began to express a preference for those plants with a tight cluster of tender young leaves in the centre of the plant at the top of the stem. Because of this preference for plants in which there were a large number of tender leaves closely packed into the terminal bud at the top of the stem, these plants were selected and propagated more frequently. A continued favouritism of these plants for hundreds of successive generations resulted in the gradual formation of a more and more dense cluster of leaves at the top of the plant. Eventually, the cluster of leaves became so large, it tended to dominate the whole plant, and the cabbage "head" we know today was born. This progression is thought to have been complete in the 1st century A.D. http://gardenline.usask.ca/veg/cabbage.html

Alas, there is no evidence suggesting that cabbages have indeed been selected in this way. Tender tea leaves and leaf buds have been selectively picked for 5000 years without producing anything like a cabbage head. Almost certainly the cabbage head is a rare, possibly multigenic, recessive mutation of arrested leaf development.

...To permit growth from their tips, plants must repeatedly form new cells above the growth zone. This production of new cells is critical to replenish the growth zone after a set of cells have elongated and matured. These new cells are formed in regions called apical meristems. Meristems are active regions of mitotic cell division. A head of cabbage can be thought of as the highly enlarged apical meristem of a shoot, the regions not only of new cell production, but also the location where stems make new leaves. The leaves of the cabbage arch over the apical meristem. This curvature of the leaves is typical of what we find at the apical meristem. Cabbage and shoot apical meristems have numerous leaves positioned very close together. The internodes near the apical meristem have not yet elongated. http://www.wsu.edu/~wsherb/edpages/delicious/cabbage.html

Cabbage heads are defects of apical meristem growth, most likely due to the failure of a regulatory gene. In cauliflower and broccoli, the dense floral "head" is already known to be the result of a single defective regulatory gene encoding a MADS-domain protein (a transcription factor: a protein that binds a specific DNA sequence and regulates its transcription). MADS genes are very ancient; we have such genes too, though we split from plants eons ago. The defective CAL gene has a nonsense mutation (a premature stop codon); its human analog would be some forms of cystic fibrosis. Far from being the result of the intergenerational toil of Mediterranean farmers, the cabbage head is the plant analog of a rare genetic disease, requiring but a single serendipitous find. People discovered the exceedingly uncommon mutant and turned it into a crop, perhaps first choosing these plants for their unusual looks rather than their taste. No amount of toil by Asian peasantry would produce a tea head, because tea plants, unlike cabbage, do not control their flowering through a MADS box gene with a problematic codon. Hence the vegetable freak show in our gardens.
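What a nonsense mutation does can be sketched in a few lines of Python. This is a toy illustration, not the actual CAL sequence: a ribosome reads codons until the first stop codon, so a single point mutation that turns an early codon into a stop yields a truncated, non-functional protein.

```python
# Toy illustration of a nonsense mutation (premature stop codon).
# The sequences and the tiny codon table below are made up for the
# example; they are NOT the real cauliflower CAL gene.

CODON_TABLE = {
    "ATG": "M", "GCT": "A", "GAT": "D", "TCT": "S",
    "AAA": "K", "GGT": "G", "TAA": "*", "TAG": "*", "TGA": "*",
}

def translate(dna):
    """Translate codons until the first stop codon, as a ribosome would."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE[dna[i:i + 3]]
        if aa == "*":  # stop codon: translation terminates here
            break
        protein.append(aa)
    return "".join(protein)

wild_type = "ATGGCTGATTCTAAAGGTTAA"  # full-length protein
mutant    = "ATGGCTTAGTCTAAAGGTTAA"  # GAT -> TAG: premature stop

print(translate(wild_type))  # MADSKG
print(translate(mutant))     # MA -- truncated, non-functional
```

One damaged codon out of seven, and most of the protein is simply never made; the rest of the gene is intact but unread.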

The uncomfortable thought is that this selection for rare mutants was going both ways. As the farmers were discovering and cultivating cabbages with humongous folded buds, the plants were discovering and selecting human heads with humongous folded brains that made their possessors capable of crop cultivation. Greater intelligence may well be the consequence of developmental failure selected upon by the needs of domesticated plants.

Are we cabbageheads?


Cain's mark

Related to the "Why are we white?" discussion.

We all know the story of Abel (Hevel: to give breath) and Cain (Kanah: to acquire). The two sons of the first human who chose the forbidden plant product over the bliss of edenic life went two separate ways: Cain was a tiller of the ground and Abel was a keeper of sheep. Then Cain killed his nomadic brother. His Maker "set a mark upon Cain, lest any finding him should kill him" (Gen 4:15) and cursed him

...Cursed are you from the ground that has opened its mouth to take the blood of your brother from your hand. When you work the ground, it will no longer give its strength to you; a wanderer shall you be throughout the land. (Gen. 4:10–12)

Biblical scholars have long been struggling to understand this verse:

...the syntax is awkward, but in Hebrew, that's exactly what the text says: that Cain will be cursed "from" the ground. The strange phrase can either mean that the ground is the source of Cain's curse, (the one doing the cursing, as it were), or that the effect of the curse is to separate Cain "from" the land.

...the end of the story tells us that Cain settles in the land of Nod, and that he builds a city which he names after his son. At first glance, Cain seems to succeed in subverting his decree of exile. But the place he "settles" in is not really a place; its name is the Land of Nod, a Hebrew term that means "the Land of Wandering". As Nachmanides notes, the Torah speaks of Cain's city-building in the present tense. Nachmanides suggests that the present tense indicates that Cain never finished the project. He was perpetually "building", starting at one point, then stopping, then starting again, always dreaming the dream, but never able to see the project through to completion. Cain desperately seeks to ground himself -- to make a home for himself, or to build a whole city full of homes. But he is a wanderer. The harder he tries, the more the dream evades him.


There is a long tradition (fully accepted only by the Mormons) of viewing Cain's mark in racial terms: that Cain's mark was his black skin, and that his separation from the ground reflects the presumed disinclination of the Negroid races to farm. I never quite understood how this interpretation fits the biblical passage; I think it is nonsense. I have a rather different idea of what Cain's mark actually is.

I think that Cain's mark is in our genes: it is our genetic adaptations to subsistence farming. One of these adaptations is the recently discovered gene that lets our skin revert to the ancestral condition of the hairy hominids and lose the pigmentation that blocks sunlight. This is an unusual adaptation because the gene that gives us this ability (East Asians have a different one) has been shown to have emerged only 6-12 kya, during the agricultural revolution. The original Europeans had dark skin; this population was almost entirely displaced or decimated by a massive migration of white, farming people from the Middle East (see the discussion in the post). The gene was of great importance to the early farmers because living from the ground severely depletes the diet of vitamin D. Whitening of the skin is an adaptation to chronic deficiency of this vitamin, which is produced photochemically in the skin. This "white" gene (more than one gene regulates pigmentation) is shared by 97+% of Europeans and the other descendants of the early farmers of the Fertile Crescent.

Cain's mark is not black skin; it is white skin: the evidence of a symbiotic relation with the cereal grasses that have steered human evolution since the onset of agriculture. Socialization and civilization can also be viewed as such adaptations, and similarly complex adaptations are seen in farming animals (such as leaf-cutting ants). Surviving the cycles of famine and overpopulation, the closeness to animals that requires increased immunity, the tolerance to gluten and other plant products - these are all genetic adaptations to farming, and they are the essence of "whiteness," as "white" people are simply the descendants of the earliest Asian farmers, who paid the highest price for their experimentation with low-yield primitive agriculture.

The story of Cain and Abel is the sad story of the mutant "white" farming folk who systematically depleted and largely exterminated the dark-skinned prefarming nomads and hunter-gatherers along their major migration routes. Cain's descendants are cursed from the ground by the very plants on which they rely for a living. Subsistence farming requires collective effort, and that leads to overpopulation, which in times of famine drives the farmers to seek new land; hence their endless wandering. Their strength in numbers meant that the non-farming folk had little hope of stopping the tide of their white brothers. Europe, Egypt, India, the Russian steppes fell one after another. The descendants of Cain made their home in the Land of Nod; the very curse from the ground is the one that drives them from the ground, as their numbers grow and the land can no longer support their population.

To know what Cain's mark looks like, get yourself a mirror.

Why are we white?


The common answer is that up North there is no selective pressure to keep sun-blocking melanin in the skin, whereas pale skin enhances photogeneration of vitamin D. The whitening of European skin is very recent, just 6-12 kya (http://www.sciencemag.org/cgi/content/summary/316/5823/364a): the Europeans became white only after the transition from hunting to farming. Oddly, the Neanderthals were pale-skinned (and had red hair and blue eyes) without any farming.

...Why did Europeans become pink even as Mongols and Inuits at the same or higher latitudes remained brown? Why did Mayas and Incas fail to become as dark brown as Africans or Melanesians of the same latitude? The Jablonski-Chaplin map predicts Native South Americans of Colombia, Venezuela, and coastal Peru to be as dark as equatorial Africans. In fact, they are not much darker than native North Americans. The map predicts the Saami of Lapland, the Inuit people of Greenland and Canada, and the Aleuts of the Bering Sea and northern Siberia to be lighter-skinned than Scandinavians. In fact, they are darker. The map predicts a band of people stretching around the globe at 55 degrees north latitude (the natives of Kazakhstan, Irkutsk, Ulan Bator, northernmost Manchuria, the Aleutians, Juneau, Hudson's Bay, and Labrador) to be as fair as Danes. In fact, they are much darker.

...European agriculture began about 10 kya in the Near East and spread to the Baltic by 5 kya. The advent of agriculture saw a dietary shift from meat to grains. This reduced dietary vitamin D intake among farming peoples and so perhaps lightened their complexions slightly via the paleness adaptation. It was probably not significant outside Europe because domestic cereals do not grow without intensive modern agricultural techniques above about 55 deg of latitude. Higher latitudes are just too cold—the growing season is too short—to let crops compete successfully with herds as food source. Consequently, even post-Neolithic high-latitude peoples continued to have a diet rich in meat (and so, vitamin D). These include the Inuit (seagoing mammals), Aleuts (fish), Saami (reindeer), Mongols (horses), and Native North Americans (bison).

...these adaptations functioned by the loss of genetic coding for dark complexion. The gene pool of the Native Americans who crossed through the Beringia bottleneck and populated the New World no longer had all the needed genes. The genetic variability subsequently available to their descendants simply did not include alleles at the five loci necessary to produce dark brown offspring.

...It is proposed that the pale skin seen in modern Europeans and Asians is not the result of Darwinian selection; these attributes provide no survival benefits. They are instead the results of sexual selection combined with a third, previously unrecognized, process: parental selection. The use of infanticide as a method of birth control in premodern societies gave parents – in particular, mothers – the power to exert an influence on the course of human evolution by deciding whether to keep or abandon a newborn infant. If such a decision was made before the infant was born, it could be overturned in the positive direction if the infant was particularly beautiful – that is, if the infant conformed to the standards of beauty prescribed by the mother's culture...

...The problem with sexual selection is that it normally results in a trait’s strong sexual dimorphism... It has been suggested that dark complexion reduces the incidence of skin cancer, improves thermoregulation (ability to sweat), or camouflages the hunter. Others say that light skin is less at risk from cold injury. Some speculate that skin tone is merely an unselected by-product of adaptations to disease and parasites.

So here we are, the victims of the white man's burden and the fosterlings of millennia of inadequate diet and avitaminosis, the paleface aberrations inexplicably loved by the infanticidal mothers for not having the very camouflage which is needed for hunting, the lost souls who are genetically incapable of returning to the black, brown and beige world of our ancestors...

For what sins of our ancestors are we white?