I have too much on my plate for this to be anything beyond a stray thought at the moment. I have been reading a lot on violence and unrest in historical China, on which a more complete post some time soon (I hope). In the meantime, it has me thinking quite a bit about the Occupy movement and the broader unrest of the past few decades.
Two relatively recent works coming out of the Tilly school have raised interesting questions on what we might call the early modern rights transition. Thomas Buoye's Manslaughter, Markets, and Moral Economy and Ho-fung Hung's Protest with Chinese Characteristics are both attempts to make sense of the changing motivations for violence and mass protest in the mid-Qing Dynasty (18th century). Buoye splits these disputes into contractual - disputes caused by contract-based events like evictions and defaults - and non-contractual - disputes over things outside what is specified in contracts. Hung divides protests into state-engaging - seeking state assistance like famine relief - and state-resisting - avoiding state actions like tax collection. Both point to a shift, probably starting in the late Ming (late 16th/early 17th century), from a social and governmental regime founded on control over labor to one founded on control over capital (mainly land and silver). In other words, there was a shift in the most important sources of wealth and power, and both the state and society reacted by creating new means of control.
Prior to the shift, control of labor was relatively important. Labor was relatively scarce, and there was not enough liquid capital for capital-intensive techniques, or market-based control regimes. Because of this, gentry exerted their power through customary rights over bonded labor, and the state used corvee - mandated labor - as part of its tax regime. Control of land was important, but if it was not coupled with direct control over labor, it was ineffective.
The 16th and 17th centuries brought about two important shifts. First, population surged, decreasing the marginal value of labor. Second, an influx of silver decreased the frictional costs of market-based control mechanisms. Gentry began to focus more on exerting control of land, knowing that they could obtain labor on the market. The state commuted tax payments in kind (grain, textile) and in labor (corvee) to monetized payments (the so-called "Single Whip Reform"), confident in the transferability of capital. The result was the beginnings of the end of customary rights regimes with a dual focus on land and labor, and the transition to an early capitalist regime, where labor, produce, currency and land were all readily interchangeable - a regime in which control of land and currency began to trump control of land and labor. Increasingly, this control was marked contractually, rather than through customary rights.
This transition was slowed somewhat by the fall of the dynasty. Warfare, famine and epidemics reduced the population, increasing the marginal value of labor. Wartime markets were less efficient, and unrest made contracts hard to enforce. But once the Qing Dynasty was stabilized, the transition continued. The mid-Qing was marked by unrest and violence, demarcating the still-ragged edge of the contractual, capitalist rights regime. Initially, most unrest was directed toward resisting the new rights regime - advocating for "traditional" customary rights, resisting the new taxes, laws and contracts. Eventually, unrest came to be directed largely toward hashing out rights within the new regime. Instead of resisting the idea of contractual labor, tenants advocated for more or better contractual rights. Instead of resisting state programs, people protested the specifics of how they were implemented. Disputes were increasingly contained within the new rights regime rather than resisting its imposition or advocating for the older formulation of rights.
It appears to me that we have gone through several transitions since then, from the industrial revolution's focus on light and then heavy industry, to the further capitalization of agriculture during the green revolution, to the shift from goods to services. And we are going through a similar transition now. With the rise of intellectual property and digitization, rights to traditional capital forms are changing. Capital markets have become hyper-fluid, and focus largely on profiting from the borders of outdated regulatory regimes that are now outpaced by the rate of transactions. Sectors like arts and entertainment, information technology, finance, education and services now dominate traditional productive sectors in durable and non-durable goods, let alone even more "traditional" sectors like agriculture and mining (see the chart at xkcd). Even in the "traditional" productive sectors, intellectual property has become more and more important. Agribusiness is increasingly about patenting drugs, genes and even species - bringing this most traditional of sectors into the information rights regime. Even all of these sectors put together are completely dominated by the market in derivatives. Rights and transactions in nearly pure information are replacing all the old forms of capital - land, labor, goods, services...even currency.
Through all of these innovations, most people - and even governments - are being left behind. In the Ming and Qing, tenants were lost until they learned to assert their rights within the new regime. The strength of the Qing at its height came in part from its embrace of the land-capital power base - taxes were denominated primarily in land, and collected in silver. It fell in part because it failed to transition to a power base in industrial capital, with the appropriate social and governmental controls.
The current unrest, at the popular level, reflects the fact that individuals have not yet figured out how to assert their rights under the new regime. The way IP works now privileges giant corporations with massive legal teams. The negative outcomes of this system are felt from the web to pharmaceuticals to agriculture and consumables, where bigger corporate entities have the advantage. Open source, creative commons and similar concepts seem to be early attempts by consumers and more diffuse production groups to assert their rights under the intellectual property system. To this point, these attempts to reform within the system have been most effective in the more abstract forms of information - IT and software. Because consumers are in less direct contact with producers of other goods - drugs, food, etc. - it has been harder for them to recognize the links in those systems. So far, reform attempts have been largely confined to the outdated rights regime or to resistance to the new system.
Many governments are also failing, to varying degrees, because they have not adjusted their tax systems or their forms of sovereignty to account for these changes in rights regimes. Tax codes still focus on income and corporate taxes, which made sense during the industrial period, when those were the most important channels of profit. This is not how profits are produced today. Most people are more important as consumers than as producers. Financial transactions are inadequately taxed, as are profits obtained through control of intellectual property. Media - the direct control of information - has become even more important for elections, and a combination of finance and media corporations is able to dominate these processes. Sovereignty is still theoretically defined around the individual, but it functions around the unit of information. Government ends up underfunded for critical programs, and elections are travesties of financial escalation and media domination.
Again, with the exception of a few key sectors, individuals have been unsuccessful in asserting their economic or political rights as the rights regime changes. I would argue that this is largely because resistance to the excesses of the new system is stuck in modes of resistance based in an outdated system. It is tempting to demand a return of old rights and responsibilities, but history shows that this mode of engagement will ultimately fail. If the 99% is to be recognized in the 21st century, we must advocate for ourselves within the new rights regime. This means we must continue to fight for open-source, copyleft protections - of both "traditional" goods and methods, and of new ones developed through a more diffuse community. We must develop awareness linking producers to consumers of goods whose chain of production is more obscure. We must convince our governments to tax the most profitable sectors of the economy, both to ensure the government remains solvent and to put more controls on these sectors. Politically, the new rights regime strikes me as a bigger challenge. How can the diffuse masses assert control of political information against better-organized, centralized industry lobbying? How can individuals continue to assert sovereignty in a context where control of liquid capital and media is more important than social organization? For now, I will leave these questions to others, because I am out of ideas.
But in short, we must recognize that individual voters, consumers and small-scale diffuse producers are being left behind by centralizing, aggrandizing control of information-capital. To resist this and reclaim power for ourselves, we must meet the new rights regime on its own terms.
Wednesday, December 7
Tuesday, November 22
Fixed Ontologies and Authority Creep
I've been having a recurring conversation recently about what humanists can offer to the world at large. How can we get out of the ivory tower and offer something that goes beyond a community of specialists? Some academics complain that policy-makers and the like don't listen to the humanist perspective. In large part, this is due to the way we cloister ourselves in disciplinary echo chambers, writing papers that are incomprehensible without years of specialist training. Ironically, moves toward interdisciplinary and digital collaboration have only hardened the divisions between areas of specialty (at least according to comments by Katy Borner at IPAM last summer) - it's now easier than ever to collaborate with people who do exactly the same things you do. In any case, my answer has been that humanists need to actively engage with the outside world. Not to wait for them to come to us; to go to them.
What do advanced scholars of the humanities have to offer? We understand media, interpretation, and realms of meaning better than anyone else. We put in the time to learn languages (often dead ones), theories (often obscure ones) and develop a strong sense of context. Advanced scholarship in the sciences and technology involves intensive training in a particular paradigm. The power of this is the ability to think in depth and in detail within a particular ontology. The weakness is it builds in a type of blindness to the limitations of this particular disciplinary brand of knowledge.
By example - the medical profession. Doctors are highly trained, regulated and institutionalized. As a result, they are very good at certain things. However, there is a sort of mission creep - or perhaps "authority creep" - that comes with this level of expertise. Biomedical doctors are very good at treating acute injuries and many kinds of (bacterial) infections; they are generally not good at treating pain and other (viral) infections. But because viral infections look a lot like bacterial infections and pain often accompanies injury, these are also assigned to doctors' area of expertise. Frequently, for no particularly good reason, products will instruct you to "consult your doctor before using." This itself is the result of another profession's "authority creep" - lawyers.
But the big problem is not that doctors are bad at treating things like pain, obesity and the common cold. The problem is that most doctors do not recognize or admit that they are bad at them. This is a problem of operating within a fixed ontology without seeing its limits. Doctors are trained to look for a chain of causal factors that can be identified either in the laboratory or through double-blind experiments. This is very good at revealing certain types of causalities. We manifest symptoms of a cold when we are infected by certain viruses. At some level, the cold is "caused" by the virus. Therefore, the best solution is assumed to be preventing or curing viral infections. At some level, this understanding is useful. Things like hand-washing and other means of preventing exposure help us avoid infection by understanding the vector of transmission. But the overwhelming focus on the proximate vector of infection keeps us from recognizing other causal factors - stress, weather change, and so on - and from taking preventative or curative action at this level.
Obesity is an even stronger example of medical authority creep that distracts from more meaningful causes, and perhaps more meaningful solutions. Doctors deal with obesity because it is itself a physical ailment, and it leads to many more serious physical ailments - cardiovascular disease, diabetes, even cancer - that are more clearly within the medical realm (although these are generally things that bio-medicine treats decently - not badly, but not particularly well). The thing is, even as medical knowledge advances, people keep getting fatter.
This is because of the mechanistic view of the body that is omnipresent in bio-medicine. When you break it into individual mechanisms, it is clear why we get fat. We get fat because we eat food that is not used for energy or released as waste. Doctors have ever more fine-grained understanding of the mechanisms by which we put on weight. Ultimately, it reduces to some pretty simple points - if you eat too much - particularly of certain things like sugar (and maybe fat), and you don't exercise enough, you get fat. Thousands of diets have been based around this fundamental realization - instructing people to eat less of certain things, or less at certain times, or just less in general. And these diets have failure rates of essentially 100%. The proximate causes of obesity do not reveal the right way to treat it.
The thing is, obesity has other causal mechanisms - ones that are not easily incorporated into the biomedical ontology. Obesity has a strong spatial dimension and a strong social dimension. Some people have suggested these factors should be taken into account in fighting obesity, but most of the medical community remains focused on changing diet or exercise, or using drugs to target certain metabolic pathways. These approaches fail, and are fundamentally wrongheaded.
The specifics of this are somewhat of a side point (albeit an important one). It actually looks like zoning laws and the like may be more important in fighting obesity than diet-and-exercise plans. This does not sit well with most doctors, because there is little room in their ontology of disease for spatial and social factors. This is one of the revelations that humanists and social scientists have to offer the world - a recognition of the limits of disciplinary paradigms. Americans are getting shorter - generally a good indicator of falling health and living standards. This has happened before, at other times when we were ostensibly progressing (such as the industrial revolution). If "better" understandings of obesity still lead to more of it, that seems like an occasion to reexamine what exactly "better" understanding means. And examining the boundaries of understanding is a job for humanists.
Labels:
geospatial analysis,
medicine,
random thoughts
Monday, November 21
SSHA 2011
I presented at SSHA 2011 this weekend, where I was on a panel with Daniel Little, among others. Lots of interesting stuff on stature for reconstructing living standards in the past, social networks, etc. Having posted the introduction to my paper, I thought it worth presenting a bit more here.
Here are some highlights from my slides from SSHA 2011. I argue that digital humanities, centered thus far on search and tagging capabilities, has not made a qualitative shift in the type of historical research being done.
Unsupervised topic modeling, because it does not reproduce the researcher's bias for "interesting" topics (major events, famous people) or for topics that are easily addressed by existing historiography, has much potential for investigating the "great unread" - the large portion of the corpus that is essentially unstudied.
As an example, I show that "bandits" (zei 贼) are hard to pick out of Chinese texts based on keyword searching. Single-character keywords are too general (everything from embezzling ministers to thieves to rebels),
and multi-character keywords are too specific (they miss more than 70% of the occurrences of 贼).
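The recall gap being described can be illustrated with a toy search in Python. The snippets and the compound list below are invented stand-ins for illustration, not passages from the actual corpus:

```python
# Toy illustration of why single-character search over-matches while a
# fixed compound list under-matches. Snippets are invented examples.
snippets = [
    "贼匪劫掠村庄",  # bandits raiding a village
    "逆贼叛乱",      # rebels in revolt
    "贼臣贪赃",      # a corrupt, embezzling minister
    "捕获窃贼",      # a thief captured
    "贼人夜入民宅",  # intruders entering a house at night
]

# A single-character search catches every occurrence of 贼, regardless
# of whether it means bandit, rebel, thief, or treacherous official.
single_hits = [s for s in snippets if "贼" in s]

# A multi-character search only catches the compounds we enumerate in
# advance, missing the other senses entirely.
compounds = ["贼匪", "逆贼"]
compound_hits = [s for s in snippets if any(c in s for c in compounds)]

print(len(single_hits), len(compound_hits))  # 5 hits vs. 2 hits
```

The single character finds everything, including much we don't want; the compound list silently drops the senses we failed to anticipate.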
To approach this type of diffuse and linguistically amorphous phenomenon, topic modeling works better. My 60-topic LDA pulled out five topics related to bandits, which divide nicely among the different phenomena associated with the keyword - corruption, law and order, trials, rebels, and counter-rebellion.
These topics not only allow for better searching of the corpus; they also show different temporal characteristics. The law-and-order topics are highly seasonal (peaking in the early fall), while rebellion is not.
Rebellion is also highly driven by specific major events (for example, the Taiping Rebellion), while the law-and-order topics are more endemic at a low level (with some cyclical and apparently random variation as well).
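Detecting this kind of seasonality amounts to averaging a topic's document weights by calendar month. A minimal sketch in Python, with made-up topic weights standing in for actual model output:

```python
from collections import defaultdict

# Hypothetical per-document model output: (year, month, weight of the
# law-and-order topic in that document). Real values would come from
# the fitted model's document-topic proportions.
docs = [
    (1850, 2, 0.02), (1850, 8, 0.11), (1850, 9, 0.14),
    (1851, 3, 0.03), (1851, 9, 0.12), (1851, 10, 0.09),
]

# Pool documents across years and average the weight for each month.
by_month = defaultdict(list)
for _year, month, weight in docs:
    by_month[month].append(weight)
monthly_mean = {m: sum(ws) / len(ws) for m, ws in by_month.items()}

# The month with the highest mean weight marks the seasonal peak.
peak_month = max(monthly_mean, key=monthly_mean.get)
print(peak_month)  # with these toy numbers, the peak falls in month 9
```

A non-seasonal topic like rebellion would show a roughly flat monthly profile under the same calculation.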
Another surprising result was the ability of the topic model to find phonetic characters commonly used for names. In Chinese, some characters are used almost exclusively in multi-character transliterations of foreign names (e.g. 尔克阿巴图布啦勒萨, etc.); the model discovered the association among these characters in the corpus and assigned them to two topics. The program also found common Chinese surnames (张陈李吴王刘朱周, etc.) and assigned them to one topic, and found other characters that occur largely in given names or the names of temples and the like (文玉麟祥福荣, etc.) and assigned them to a second. Without incorporating any notion of names or multi-character words into the model, it was nonetheless able to find many instances of these and group them together. If we graph a composite of the Han names against a composite of the non-Han names, several things jump out: the prevalence of non-Han names during Qianlong's western campaigns, and a secular decline in non-Han names alongside an increase in Han names (or perhaps an event-driven shift toward the importance of Han officials at the time of the Taiping Rebellion, a pattern well supported in the historiography).
I also presented several other topics that don't relate to banditry. Ritual allows us to easily pick out the reign periods - all three ritual topics spike when there is a new emperor. We can also clearly see a drop-off in the Imperial Family topic when Qianlong's mother dies.
Any comments would be much appreciated, as I'm trying to turn this into a more formal article as we speak.
Monday, November 14
Unsupervised Learning and Macro-history
It has been difficult to maintain a blog with interesting posts separate from the main thread of my research. The everyday work of grading, reading and writing academic work has placed severe limits on my time and brain-space, so I have let this blog fall by the wayside for quite some time. I have recently decided that I will use this as a space to record and open some of my academic work to outside thought and criticism. A later post will address the reasoning behind this decision in somewhat more depth; for the time being, suffice it to say that this is a way to spur me to keep writing. Below is the first of these new posts.
I will be presenting a paper at the upcoming Social Science History Association meeting in Boston. This paper is a first attempt at using topic models (specifically a model called latent Dirichlet allocation, or LDA) to analyze large-scale historical and textual trends. Specifically, I will be analyzing a partial version of the Qing Veritable Records (Qing shilu 清實錄), running from the Yongzheng reign through the end of the dynasty. The following is a draft of my introduction:
The promise and failure of digital humanities
The past ten years have witnessed the meteoric rise of the so-called digital humanities. This has become a buzzword in academic circles - a topic of much contemplation for some, and a passing fad to be ignored by others. Perhaps this is due in part to confusion over what exactly the term is supposed to mean. As far as I am concerned, the digital humanities are the result of two related trends - the large-scale digitization of textual corpora, and the realization that internet technologies could be adapted to research on these corpora. The internet is, after all, a particularly large and complexly organized body of text.
As such, much of the early work in humanities computing has centered on tasks that existing internet technologies solve quite well - searching and tagging. In essentially every discipline and sub-discipline in the academic world, there now exist large textual databases with increasingly powerful search capabilities. At the same time, several large consortia (cf. the TEI, etc.) have worked to develop humanities-specific standards for tagging textual and meta-textual data. As with the internet, the more tagged and interlinked a text, the more responsive it becomes to search technology.
These are both positive trends that have made certain types of research much easier. For example, it was possible for me to write a master's thesis on yoghurt in historical China in less than a year. Previously, given the rather obscure nature of this topic, it would have been necessary to read massive amounts of text looking for occasional references to dairy; with search technology, it was simply a matter of finding a few keywords and reading the limited sections of the texts that actually referred to the topic of interest. In short, research has become faster and more directed thanks to the reduced overhead of finding references to topics of interest.
Nonetheless, these technologies have not qualitatively changed the type of research that gets done. Before search capabilities, researchers focused primarily on topics that were of general interest, and frequently on phenomena that had already generated large amounts of scholarship. For very early history, we have limited numbers of extant texts, and these texts have generally been studied comprehensively. But for later periods of history, the tendency to focus on the "interesting" and the "already studied" has left a large body of texts that remain essentially unread - the "great unread".
With the advent of search technology, it has become even easier to focus on the particular topics of interest, as opposed to those dictated by the coverage of the corpus - easier to find topics like "yoghurt" that are not a central concern in most historical Chinese texts. At the same time, the rise of computerized tagging has made well-researched areas even more amenable to further research. Arguably, all that the digital humanities have done thus far is to turn the "great unread" into "the great untagged." In fact, it may have narrowed the focus of research as younger generations become less willing and less able to work on texts that have not been digitized.
To a large degree, though, the most interesting and most heavily researched corpora are the first ones to be digitized, so this does not really change much.
Macro-history has historically been constrained to a slightly different subset of the received corpora. To deal with very large areas or broad sweeps of time, macro-historians have generally been limited to a few approaches.
It is worth noting that the statistical orientation of macro-historical work has made it more amenable to computerization from a much earlier date. Arguably, branches of social and economic history have been in the "digital humanities" since the 1970s or earlier.
Network analysis has been another growth area in social history for the past decade. Social network analysis also had roots in sociology in the 1970s, but it was not until the growth of the internet that physicists and computer scientists got interested in the field and its computational capabilities began to take off. The resurgence in network theoretical approaches to historical research has been one area in which the digital turn has led to a recognizable, qualitative shift in the type of research being done. Nonetheless, it has also remained constrained to that familiar subset of textual data - well-ordered, well-researched sources relating primarily to "people of note."
While there are exceptions to these general observations, it is clear that the digitization of historical research has not resulted in major shifts in the type of research being done, but rather in the extent of that research. It is a truism that as long as research questions are researcher-driven, the areas of analysis will largely be constrained to areas of obvious interest to those researchers at the outset of their projects - notable individuals, events, and topics of longstanding interest to the historiography. There is, however, a sizable body of text into which this research never ventures. What is more, increasing amounts of this text have been digitized, opening it up to less research-intensive methods of exploration. The question that remains is how to take research into this largely uncharted textual realm.
Unsupervised learning and the "great unread"
To me, the answer is to take the direction of the research at least partially out of the hands of the researcher. Another branch of digital technology that has developed in recent years has focused on a different type of machine learning. Search technology is driven in part by what is called supervised learning; the better search algorithms improve their results by taking into account human responses to their earlier searches. But increasingly fields like computational linguistics have delved into unsupervised learning, building algorithms to analyze text based on the text itself, without human intervention at the analytical stage. These algorithms reveal aspects of the underlying structure of texts that are not as explicitly constrained by researchers' conceptions of what the texts should look like. These techniques should allow us to delve into that great unrehearsed realms of digitized corpora:
I hope to show that these unsupervised statistical techniques allow us to look at some of the more diffuse phenomena in history. We already have well-developed methods to look into the lives of famous people, important events and the easily-quantifiable aspects of everyday life (demographics, economics). But aside from the somewhat quixotic attempts of Braudel and his ilk, few have looked into the everyday constants of the unquantifiable past. By undergoing a computerized reading of a large corpus (the Veritiable Records of the Qing Dynasty), this project is able to quantify textual phenomena in a way that is not based on researcher assumptions. In doing so, I am able to look at the progress of five types of temporal change:
In this method, rather than describing a priori the important categories of analysis, I assume only that there exist such categories. I then use a topic model to determine these categories based on the organization of the corpus itself. A topic model is a statistical model of how documents are constructed from a set of words and topics. At root, it is no different than the implicit model that researchers have about their texts - we assume that the authors of these texts were writing about certain topics, and chose to do so in a certain way. The substitution of a topic model simply implies a unified application of statistical assumptions across the entire corpus, rather than the selective application of unquantified assumptions across a subset of the texts.
The specifics of this topic model (Latent Dirichlet Allocation) and the software implementation (MALLET) that I used will not be described here. The model can be summarized as follows:
I will be presenting a paper at the upcoming Social Science History Association meeting in Boston. This paper is a first attempt at using topic models (specifically a model called Latent Dirichlet Allocation) to analyze large-scale historical and textual trends. Specifically, I will be analyzing a partial version of the Qing Veritable Records (Qing shilu 清實錄), running from the Yongzheng reign through the end of the dynasty. The following is a draft of my introduction:
The promise and failure of digital humanities
The past ten years have witnessed the meteoric rise of the so-called digital humanities. This has become a buzzword in academic circles - the topic of much contemplation among some, dismissed as a passing fad by others. Perhaps this is due in part to confusion over what exactly the term is supposed to mean. As far as I am concerned, the digital humanities are the result of two related trends: the large-scale digitization of textual corpora, and the realization that internet technologies could be adapted to research on these corpora. The internet is, after all, a particularly large and complexly organized body of text.
As such, much of the early work in humanities computing has centered on tasks that existing internet technologies already solve quite well - searching and tagging. In essentially every discipline and sub-discipline in the academic world, there now exist large textual databases with increasingly powerful search capabilities. At the same time, several large consortia (cf. TEI) have worked to develop humanities-specific standards for tagging textual and meta-textual data. As with the internet, the more tagged and interlinked a text, the more responsive it becomes to search technology.
These are both positive trends that have made certain types of research much easier. For example, it was possible for me to write a master's thesis on yoghurt in historical China in less than a year. Previously, given the rather obscure nature of this topic, it would have been necessary to read massive amounts of text looking for occasional references to dairy; with search technology, it was simply a matter of finding a few keywords and reading the limited sections of these texts that actually referred to the topic of interest. In short, research has become faster and more directed thanks to the reduced overhead of finding references to topics of interest.
Nonetheless, these technologies have not qualitatively changed the type of research that gets done. Before search capabilities, researchers focused primarily on topics that were of general interest, and frequently on phenomena that had already generated large amounts of scholarship. For very early history, we have limited numbers of extant texts, and these texts have generally been studied comprehensively. But for later periods of history, the tendency to focus on the "interesting" and the "already studied" has left a large body of texts that remain essentially unread - the "great unread":
With the advent of search technology, it has become even easier to focus on the particular topics of interest, as opposed to those dictated by the coverage of the corpus - easier to find topics like "yoghurt" that are not a central concern in most historical Chinese texts. At the same time, the rise of computerized tagging has made well-researched areas even more amenable to further research. Arguably, all that the digital humanities have done thus far is to turn the "great unread" into "the great untagged." In fact, it may have narrowed the focus of research as younger generations become less willing and less able to work on texts that have not been digitized:
To a large degree, though, the most interesting and most heavily researched corpora are the first ones to be digitized, so this does not really change much.
Macro-history has historically been constrained to a slightly different subset of the received corpora. To deal with very large areas or broad sweeps of time, macro-historians have generally been limited to a few approaches:
- Reliance on curatorial and bibliographic work and secondary scholarship to cover some of the vast realm of primary sources.
- Focus on data that is clearly numerical in nature, allowing the comparatively easy application of statistical technique.
- Application of social science theory and categories of analysis to provide a framework for organizing and simplifying the vast array of information.
It is worth noting that the statistical orientation of macro-historical work has made it more amenable to computerization from a much earlier date. Arguably, branches of social and economic history have been in the "digital humanities" since the 1970s or earlier.
Network analysis has been another growth area in social history for the past decade. Social network analysis also had roots in sociology in the 1970s, but it was not until the growth of the internet that physicists and computer scientists got interested in the field and its computational capabilities began to take off. The resurgence in network theoretical approaches to historical research has been one area in which the digital turn has led to a recognizable, qualitative shift in the type of research being done. Nonetheless, it has also remained constrained to that familiar subset of textual data - well-ordered, well-researched sources relating primarily to "people of note."
While there are exceptions to these general observations, it is clear that the digitization of historical research has not resulted in major shifts in the type of research being done, but rather in the extent of that research. It is a truism that as long as research questions are researcher-driven, the areas of analysis will largely be constrained to areas of obvious interest to those researchers at the outset of their projects - notable individuals, events, and topics of longstanding interest to the historiography. There is, however, a sizable body of text into which this research never ventures. What is more, increasing amounts of this text have been digitized, opening it up to less research-intensive methods of exploration. The question that remains is how to take research into this largely uncharted textual realm.
Unsupervised learning and the "great unread"
To me, the answer is to take the direction of the research at least partially out of the hands of the researcher. Another branch of digital technology that has developed in recent years has focused on a different type of machine learning. Search technology is driven in part by what is called supervised learning; the better search algorithms improve their results by taking into account human responses to their earlier searches. But increasingly, fields like computational linguistics have delved into unsupervised learning, building algorithms to analyze text based on the text itself, without human intervention at the analytical stage. These algorithms reveal aspects of the underlying structure of texts that are not as explicitly constrained by researchers' conceptions of what the texts should look like. These techniques should allow us to delve into that great unread realm of digitized corpora:
I hope to show that these unsupervised statistical techniques allow us to look at some of the more diffuse phenomena in history. We already have well-developed methods to look into the lives of famous people, important events and the easily-quantifiable aspects of everyday life (demographics, economics). But aside from the somewhat quixotic attempts of Braudel and his ilk, few have looked into the everyday constants of the unquantifiable past. Through a computerized reading of a large corpus (the Veritable Records of the Qing Dynasty), this project is able to quantify textual phenomena in a way that is not based on researcher assumptions. In doing so, I am able to look at the progress of five types of temporal change:
- Event-based change - the major events and turning points of history
- Generational change - in this case, the impact of individual rulers on their historical context
- Cyclical change - the repetition of phenomena on a circular basis, in particular the yearly cycle of seasons.
- Secular change - gradual, monotonic changes over the course of a long period
- Historical constants - phenomena that were a constant part of life in the past, but may or may not be different from the realities of modern life.
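To make the distinction between these types of change concrete: once each Document carries a date and a set of Topic proportions, even elementary time-series tools can begin to pull them apart. The sketch below is purely illustrative - the series is synthetic and the numbers are my own invention, not results from the Veritable Records - but it shows how a secular trend (a least-squares line) and a cyclical component (monthly means of the detrended series) can be separated in a monthly topic-proportion series.

```python
import math

def fit_trend(series):
    """Least-squares line through (t, y): returns (slope, intercept)."""
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    cov = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
    var = sum((t - t_mean) ** 2 for t in range(n))
    slope = cov / var
    return slope, y_mean - slope * t_mean

def seasonal_profile(series, period=12):
    """Mean detrended value at each position in the cycle (e.g. each month)."""
    slope, intercept = fit_trend(series)
    resid = [y - (slope * t + intercept) for t, y in enumerate(series)]
    return [sum(resid[m::period]) / len(resid[m::period]) for m in range(period)]

# Synthetic monthly proportions of one hypothetical topic over 20 "years":
# a slow secular rise plus a yearly (seasonal) cycle.
series = [0.10 + 0.0005 * t + 0.03 * math.sin(2 * math.pi * (t % 12) / 12)
          for t in range(240)]
slope, _ = fit_trend(series)       # secular change: gradual monotonic drift
cycle = seasonal_profile(series)   # cyclical change: the yearly rhythm
```

Event-based and generational change would show up instead as discontinuities in such a series - spikes or level shifts around particular dates or reign changes - while a historical constant would be a series with neither trend nor seasonal structure.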
In this method, rather than describing a priori the important categories of analysis, I assume only that there exist such categories. I then use a topic model to determine these categories based on the organization of the corpus itself. A topic model is a statistical model of how documents are constructed from a set of words and topics. At root, it is no different than the implicit model that researchers have about their texts - we assume that the authors of these texts were writing about certain topics, and chose to do so in a certain way. The substitution of a topic model simply implies a unified application of statistical assumptions across the entire corpus, rather than the selective application of unquantified assumptions across a subset of the texts.
The specifics of this topic model (Latent Dirichlet Allocation) and the software implementation (MALLET) that I used will not be described here. The model can be summarized as follows:
- "Topics" are considered to be a distribution across all Words in the Corpus, but are generally described by giving a list of the most common words in that topic.
- "Documents" are considered to be a distribution across all Topics in the corpus, but are generally described by giving the proportions of the two or three most common Topics.
- The generative model assumes that a Document is created by "choosing" Topics in a set proportion, and then "choosing" words from those topics randomly, according to the Topic proportions in the Document and the Word proportions in the Topic.
- The topic model is reverse engineered statistically based on the number of topics, and the observed occurrence of Words within Documents within the Corpus. The researcher specifies only four inputs to the model:
- Number of Topics
- Proportion of Topics in the Corpus (but not in individual documents)
- The Corpus itself, organized into Documents
- Word delimitation
- This model was run with 40, 60 and 80 Topics. 60 Topics gave the best combination of specificity (fine-grained Topics) and accuracy (fewest "garbage" Topics and "garbage" Words) and was used in this analysis.
- All Topics were assumed to occur in equal proportions in the Corpus.
- The Corpus used is the Veritable Records for the last eight reign periods of the Qing Dynasty, covering the period from 1723 to 1911. It is a posthumously-assembled collection of court records for the reign of each emperor of the Qing Dynasty. It is generally well-ordered into Documents (consisting of messages to and from the emperor and records of court events) organized in chronological order with "date stamps" (i.e. the date is specified for the first Document of each day).
- Words are considered to consist of single Chinese characters.
- Word order is not considered within documents (this is a "bag of words" model). Obviously word order does matter in the actual writing of documents, but this model assumes that it is not important in determining the topics used in a document - in other words, it assumes that grammar may contribute to meaning, but not to topicality.
- Documents are analyzed synchronically by the topic model (i.e. time is not integrated into the statistical model). Diachronic analysis is reimposed on the Documents after Topics have been generated. Note that the Topics were generated from a corpus spanning a long period, and topic usage may have changed over that period. In fact, topic drift is revealed by ex post facto analysis of the results, but it is not integrated into the model.
- Document length is not considered. This is potentially problematic for the statistical conclusions reached on particularly short Documents. These short Documents are primarily records of court events and tend to cluster in certain Topics. A solution will be addressed in future research.
- Regional variation is not incorporated into the topic model. Certain geographic Topics are identified by the model, but are not incorporated into my analysis of the results.
- Section headers were not removed from the corpus before analysis. These are instead captured by three Topics that occur almost exclusively, and in high proportion, in the headers.
- Written Chinese in the late imperial period did not actually consist solely of single-character words; single- and multiple-character words were both common. Single characters are analyzed because there are no good word-delimiting algorithms for classical (or modern) Chinese. As with document length and section headers, certain types of word formations are actually reverse engineered by the topic analysis, despite the fact that they are not incorporated into the model.
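The assumptions above - single-character Words, bag-of-words Documents, a fixed number of Topics, and statistical reverse engineering of the Topic and Document distributions - can be illustrated in miniature. The following is a toy collapsed Gibbs sampler for LDA over four invented "documents" of single Chinese characters. It is a sketch of the technique, not the MALLET implementation actually used; the toy corpus, the symmetric priors and the iteration count are all my own assumptions.

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics, alpha=0.5, beta=0.1, iters=200, seed=0):
    """Toy collapsed Gibbs sampler for LDA over bag-of-words documents."""
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})              # vocabulary size
    ndk = [[0] * n_topics for _ in docs]               # Document-Topic counts
    nkw = [defaultdict(int) for _ in range(n_topics)]  # Topic-Word counts
    nk = [0] * n_topics                                # Topic totals
    z = []                                             # topic of each token
    for di, d in enumerate(docs):                      # random initialization
        zd = []
        for w in d:
            k = rng.randrange(n_topics)
            zd.append(k)
            ndk[di][k] += 1; nkw[k][w] += 1; nk[k] += 1
        z.append(zd)
    for _ in range(iters):                             # Gibbs sweeps
        for di, d in enumerate(docs):
            for wi, w in enumerate(d):
                k = z[di][wi]                          # remove old assignment
                ndk[di][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                # resample proportional to P(topic | doc) * P(word | topic)
                weights = [(ndk[di][t] + alpha) * (nkw[t][w] + beta)
                           / (nk[t] + V * beta) for t in range(n_topics)]
                r = rng.random() * sum(weights)
                k = n_topics - 1                       # fallback for rounding
                for t, wgt in enumerate(weights):
                    r -= wgt
                    if r <= 0:
                        k = t
                        break
                z[di][wi] = k
                ndk[di][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return ndk, nkw

# Invented "Documents" tokenized into single characters, as in the paper:
# two about flood control, two about taxation.
docs = [list("洪水河堤洪水"), list("稅銀錢糧稅銀"),
        list("河堤洪水河堤"), list("錢糧稅銀錢糧")]
doc_topics, topic_words = lda_gibbs(docs, n_topics=2)
```

With such cleanly separated vocabularies, the sampler should assign each document's tokens overwhelmingly to one of the two Topics, and each Topic is then "described" by listing its most common characters - exactly the convention in the first bullet above.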
Monday, September 19
Tangentially related thoughts - cyborg simulations
This is only slightly related to the discussion of nomadic academics in the past two posts. It is related in the sense that it is about alternative ways of relating to data and data-generation. There has been a decent amount of work on virtual economies and what they can tell us about RL economies. More recently, there has been a breakthrough in AIDS research (hat tip @javiercha) coming out of a tinker-toy type game. What these situations have in common is the combination of computer simulation with human interaction. The simulation is not complete without BOTH elements. This strikes me as an especially powerful way of doing certain types of research, something that goes beyond crowdsourcing, beyond complex systems models (like Conway's Game of Life). This is also different from gamification (which is bullshit, btw), and from Jane McGonigal's games-to-change-the-world stuff. This is games to study the world.
This is cyborg simulation (half-human, half-computer). The promise, to me, is to combine the things that computers do well (crunching numbers, remembering things, setting limits) and the things that humans do well (some types of pattern recognition, teamwork, "creativity"). Wow. Think about that!
Friday, September 16
The Nomadic Academic, Part 2
I've gotten some good responses on my first post, and a nice link from my old classmate Graham Webster on his blog infopolitics. He poses a very important question, one that I hope to address in more detail in a later post. Much as I may use food and energy as extended metaphors for knowledge or information, the nomadic academic does not live on books alone. How then, Graham asks, are academics to buy real food and real gas while they are off in the metaphoric steppe herding diffuse information? His answer, in short, is to combine academic and non-academic (a.k.a. "real world") employment. You should go read it in full. But how much useful work will this produce? And can it compete with the work produced by those fully in the academe?
As I said, I hope to tackle these questions at more length in a later post. But I have spent the week thinking about something else, so first this:
Part 2: The Secondary Products Revolution
In the previous post, I qualified nomadism as a strategy of mobility. Real-world nomadism as an economic-ecological adaptation required the horse and certain horse technologies (bit, stirrups) to allow people to cover more ground and control more animals than they could on foot. By analogy, certain computer technologies (full-text searching, translation software) allow the pioneer generations of nomadic academics to cover more ground than their precursors. It used to be the work of a career to read a major corpus (say, six or eight Dynastic Histories) looking for specific topics or terms (yogurt, for example). Nowadays, it is possible for a Young Turk who barely reads Chinese to write a master's thesis on yoghurt covering Sima Qian to the Northern Wei and the Southern Song (that's 14 centuries, for those keeping track at home). I know this because I wrote such a paper.
Horseback riding is only one of several critical technologies on the way to nomadism. Superior coverage of diffuse information is a new productive paradigm, for sure. It enables quicker and easier access to a broader base of information than we would otherwise have. However, the end product is the same. To extend the metaphor, the old way to write a book with broad coverage was to expend a huge number of person-hours gathering in all your sheep to sell them at market. The new alternative is to ride a horse (i.e. a search engine), allowing fewer people to expend fewer hours bringing those same sheep to market. This still does not allow the academic nomad to range that far.
So what additional technologies are needed? In the neolithic, there was a critical second step in the domestication of animals, one of major importance to the development of nomadic pastoralism. This was the so-called Secondary Products Revolution, an idea due to Andrew Sherratt on the importance of non-meat animal products like milk, wool and labor. The importance of this to pastoralists is the ability to glean small profits from their animals during the long and expensive process of raising them to maturity. Eventually, many animals came to be raised primarily for their secondary products.
The importance of this to nomads should be obvious. Nomads use animals to gather the sparse produce of marginal zones. If they were to eat entirely from the meat of their animals or by exchanging animals for other food and goods, pure pastoralists would need truly enormous numbers of animals to survive. These animals in turn would require even more enormous pastures. The feasibility of raising large numbers of animals exclusively for meat by specialists (i.e. people who didn't also farm) is almost certainly limited to the modern CAFO, which, by the way, is a density-based (not mobility-based) strategy.
Secondary products, however, allow nomads to survive off their herds for long periods of time without reducing their numbers, primarily by consuming their milk and blood. In most modern nomadic societies, milk and blood products - combined with grain foods obtained by trade - are the staples. Meat is a special-occasion food. This was almost certainly the case historically as well.
Essentially, secondary-products give a way of storing animal-based food energy. Meat cannot be stored without salting or refrigeration. Grain can easily be stored, but not easily transported. Storing meat on the hoof is not a great option because you have to keep feeding it. But once you can get milk from that hoofed-meat while it is in storage, it becomes a desirable option. More importantly, these milk-producing meat-storage units can move around. They're called cows.
In other words the secondary products revolution gave herders an energy storage option that was, in many ways, superior to grain. It was still impossible to store large numbers of livestock in one place (because they would eat all the grass), so grain remained the preferred "density" strategy. But it made specialized herding a plausible "mobility" strategy.
So how does this metaphor extend to academia? I would argue that standard research is a lot like growing grain - you sit in one place and plant a bunch of seeds (read other people's books), water them (attend seminar) and then you harvest them (write your own book). Other people can then take your seeds (book) and plant them in their own fields to grow some books of their own. Search-based research methods are more like hunting or herding - rather than working a known patch of soil, you cast a wider net (ok, it's also kinda like fishing) and pull in sparse resources from a large area. Here is where the metaphor breaks down, however, because in our transitional generation we have been writing books. That's kinda like turning our meat (or fish) into grain. Or something.
The metaphor has broken, but I hope the point is made. If we, as mobility-choosing academics, want to range wider areas, it doesn't make a huge amount of sense for us to keep writing books. That's like killing all our sheep to sell at the farmer's market. Our mobility strategy has only taken us a small distance beyond the pale. What we need are secondary products. So what do milk-cows and yoghurt look like in the realm of information?
In short, we need technologies that continue to produce as long as they are fed, that give us years of milk instead of days of steak. This means developing databases that integrate with the material they describe. It means machine-learning. It means transferable publishing standards for methods as well as content. It means network methods and complex systems. Again, the food analogy is entirely worthless here because the idea is of recursive rather than iterative methods. It means any type of research that produces semi-autonomous systems of generating questions (that will generate answers when fed), rather than one-off conclusions.
This made sense in my head and a mess on the screen. Tell me what you think.
Friday, September 9
The Nomadic Academic, Part 1
Part 1: Choosing Mobility
At its largest (and smallest) scale, history (and the end of prehistory) is the story of competition for power and resources. Individuals, families, states and empires have competed with each other for ten thousand years and more. Much of that story is about the different strategies employed in this competition: hunting vs. gathering, herding vs. farming, raiding vs. trading... These strategies represent different ways of accumulating resources, and thereby power.
Early on, the critical resource was food - enough to feed the individual and the small group. Families with better strategies for their time and place had more food, and were more likely to thrive. At the beginning of history, the neolithic revolution - domestication of plants and animals - had made food relatively plentiful; people (and to a lesser degree large animals like horses) became the critical resource. States that could control more people (and horses) could win wars, build pyramids and leave records for us to read. Arguably, this continued until the advent of modernity and the arrival of fossil fuels. Coal and oil outweigh human power by enormous factors, and have changed global dynamics - powerful states became the ones that could control fossil fuels, and later nuclear power.
In each of these periods, there were many different strategies for accumulating food, people and oil. However, we can generally divide these into mobile strategies and sedentary strategies. Hunting requires covering more ground than gathering, and is riskier, but the reward is a big bunch of good food at once. Farming focuses on extracting as many nutrients from a limited space as possible; herding makes up for sparse natural wealth by ranging animals across greater areas. The escalation of these strategies - nomadism and irrigation - further differentiates them. Nomads, by covering even more ground, can survive in areas of even sparser biomass. Irrigation, along with fertilizer, pesticides and other farming inputs, enables even larger crops out of a small area. There are intermediary strategies - hybrid farming-herding, swidden farming and so on - but these represent the two basic models: stay in one place to concentrate returns, or move faster and further to cover more ground.
The empires that formed on the basis of these divergent strategies looked vastly different. On the one hand, places like China built huge population densities and stored tons of grain. They built walls to keep people in as much as to keep raiders out. They ramped up investment in agriculture, and built standing armies. On the other hand, places like Mongolia remained sparse. People, sheep and horses ranged across large areas. Armies were generally temporary and made up for their inferior numbers with superior mobility. With a horseback army, leaders like Genghis Khan could take on ground-bound forces more than ten times the size, simply by avoiding fighting them all at once. Mobility vs. density. The balance of power between these strategies went back and forth.
Having covered the basic concept of mobility, let's return to the argument that inorganic power sources have changed the game in modernity - that human power is no longer the determinant of strong states. I don't buy that premise on the grounds that food and labor are still clearly important. There are certainly states that are powerful on the basis of their energy resources - Saudi Arabia, Canada... But most of the big powers in the world are populous - China, the US... More importantly, control of money and information has clearly become a major determinant of world power: England is a financial power, Germany a technological power...
So what have been the financial and technological strategies of these powers? For much of the so-called "Information Age" (which I would argue has origins far earlier than the computer), the sedentary strategy has dominated. Learning was kept in books in libraries, and money was kept in banks. This is where computers and the internet came in. Just as nomadism came after farming, the mobility strategy of information followed the sedentary one. Information has become diffuse, and we have begun to domesticate horses that allow us to range across it.
The earliest nomad-farmer battles of the information age came from the horse-breeders - the computer scientists. Hackers were the Scythians, the earliest barbarian raiders. Linus Torvalds and Richard Stallman are our Attila and Tamerlane. Open-source software distributes the labor for its creation across hundreds of volunteers, just as nomadic armies were made up of thousands of part-time warriors/full-time herders. Wikipedia does similar things (imperfectly) with general knowledge. In perfect irony, open-source hardware is now growing to include technology for building farm equipment. The increasing treatment of financial instruments as pure information has caused all kinds of chaos, and it's hard to tell who the barbarians are...
Anyway, this is all a lead-in to what I hope will be an extended meditation. Academia is a largely feudal/bureaucratic institution built to house and control the distribution of knowledge. When the Universities of Bologna, Oxford, Salamanca and Paris, and for that matter the Hanlin Academy and the Library of Alexandria were founded, hand-copied books were the best we had. Knowledge beyond what a single person could create and remember had to be housed in sedentary locations, and it made sense to collect it. But for scholars, the printing press was our wheel, movable type our chariot, the computer our saddle and the internet our stirrup. So I ask, in the middle of our nomadic revolution, what would it look like for an academic to choose mobility?
Reading List
Deleuze and Guattari, A Thousand Plateaus
Owen Lattimore, Inner Asian Frontiers of China
David Christian, "Inner Eurasia as a Unit of World History"
At its largest (and smallest) scale, history (and the end of prehistory) is the story of competition for power and resources. Individuals, families, states and empires have competed with each other for ten thousand years and more. Much of that story is about the different strategies employed in this competition: hunting vs. gathering, herding vs. farming, raiding vs. trading... These strategies represent different ways of accumulating resources, and thereby power.
Early on, the critical resource was food - enough to feed the individual and the small group. Families with better strategies for their time and place had more food, and were more likely to thrive. At the beginning of history, the neolithic revolution - domestication of plants and animals - had made food relatively plentiful; people (and to a lesser degree large animals like horses) became the critical resource. States that could control more people (and horses) could win wars, build pyramids and leave records for us to read. Arguably, this continued until the advent of modernity and the arrival of fossil fuels. Coal and oil outweigh human power by enormous factors, and have changed global dynamics - powerful states were now the ones that could control fossil fuel, and later nuclear power.
In each of these periods, there were many different strategies for accumulating food, people and oil. However, we can generally divide these into mobile strategies and sedentary strategies. Hunting requires covering more ground than gathering, and is riskier, but the reward is a big bunch of good food at once. Farming focuses on extracting as many nutrients from a limited space as possible, herding makes up for sparse natural wealth by ranging animals across greater areas. The escalation of these strategies - nomadism and irrigation further differentiate these. Nomads, by covering even more ground can survive in areas of even sparser biomass. Irrigation, as well as fertilizer, pesticides and other farming inputs enable even larger crops out of a small area. There are intermediary strategies - hybrid farming-herding, swidden farming and so on - but these represent the two basic models: stay in one place to concentrate returns, or move faster and further to cover more ground.
The empires that formed on the basis of these divergent strategies looked vastly different. On the one hand, places like China built huge population densities and stored tons of grain. They built walls to keep people in as much as to keep raiders out. They ramped up investment in agriculture, and built standing armies. On the other hand, places like Mongolia remained sparse. People, sheep and horses ranged across large areas. Armies were generally temporary and made up for their inferior numbers with superior mobility. With a horseback army, leaders like Genghis Khan could take on ground-bound forces more than ten times the size, simply by avoiding fighting them all at once. Mobility vs. density. The balance of power between these strategies went back and forth.
Having covered the basic concept of mobility, let's return to the argument that inorganic power sources have changed the game in modernity - that human power is no longer the determinant of strong states. I don't buy that premise on the grounds that food and labor are still clearly important. There are certainly states that are powerful on the basis of their energy resources - Saudi Arabia, Canada... But most of the big powers in the world are populous - China, the US... More importantly, control of money and information has clearly become a major determinant of world power: England is a financial power, Germany a technological power...
So what have been the financial and technological strategies of these powers? For much of the so-called "Information Age" (which I would argue has origins far earlier than the computer), the sedentary strategy has dominated. Learning was kept in books in libraries, and money was kept in banks. This is where computers and the internet came in. Just as nomadism came after farming, the mobility strategy of information followed the sedentary one. Information has become diffuse, and we have begun to domesticate horses that allow us to range across it.
The earliest nomad-farmer battles of the information age came from the horse-breeders - the computer scientists. Hackers were the Scythians, the earliest barbarian raiders. Linus Torvalds and Richard Stallman are our Attila and Tamerlane. Open-source software distributes the labor for its creation across hundreds of volunteers, just as nomadic armies were made up of thousands of part-time warriors/full-time herders. Wikipedia does similar things (imperfectly) with general knowledge. In perfect irony, open-source hardware is now growing to include technology for building farm equipment. The increasing treatment of financial instruments as pure information has caused all kinds of chaos, and it's hard to tell who the barbarians are...
Anyway, this is all a lead-in to what I hope will be an extended meditation. Academia is a largely feudal/bureaucratic institution built to house and control the distribution of knowledge. When the Universities of Bologna, Oxford, Salamanca and Paris, and for that matter the Hanlin Academy and the Library of Alexandria were founded, hand-copied books were the best we had. Knowledge beyond what a single person could create and remember had to be housed in sedentary locations, and it made sense to collect it. But for scholars, the printing press was our wheel, movable type our chariot, the computer our saddle and the internet our stirrup. So I ask, in the middle of our nomadic revolution, what would it look like for an academic to choose mobility?
Reading List
Deleuze and Guattari, A Thousand Plateaus
Owen Lattimore, Inner Asian Frontiers of China
David Christian, "Inner Eurasia as a Unit of World History"
Tuesday, March 22
More on what games teach us
I have been thinking a lot recently about the ways in which we are institutionalized. Much of this is intentional, and basically a good thing for social order (i.e. we are taught things like stopping at red lights, driving on the right side of the street, yielding to pedestrians; all things that vary somewhat by area). But there are a lot of institutionalized behaviors that are side-effects, for good or ill. For example, when I was at the IPAM Humanities workshop, the staff noticed that humanities scholars drink twice as much coffee and eat half as many sweets as mathematicians and computer scientists. This is doubtless a side-effect of the types of work that the disciplines require of us, or of the types of personalities attracted to them. But certainly no-one sat down and decided to set up regulations on how much coffee historians should drink vis-a-vis the quota for statisticians.
One of my favorite examples of this type of observation comes from season four of The Wire, when former police major Bunny Colvin works as a consultant for a troubled-youth program in the schools. He observes that the "corner kids" (i.e. the troublemakers) are learning something in school, just not what the schools think they are teaching. Specifically, they are learning how to deal with authorities without "snitching;" skills that will presumably serve them well in their anticipated future careers as drug dealers.
In fact, there has been an increasing amount of research showing that boys (in particular) are not being reached by the institutional structure of schooling. As American schools have targeted improving girls' math-science skills and self-confidence, boys are increasingly being left behind. Ali Carr-Chellman argues that this is in large part because boys' culture, especially video games, is demonized in schools, and the reward systems of those games offer boys institutional alternatives to those offered in school. As a result, boys fail to engage with their teachers; or perhaps more properly, their teachers fail to engage with them.
Jane McGonigal has argued that for most of our and the following generations, gaming takes up as much or more time as schooling, and that it has therefore become a primary medium for teaching us institutions (i.e. "civilizing" us). She thinks that gaming, as it exists, teaches problem-solving skills and a certain sort of ambition. In other words, the institutions and reward structures that gaming teaches us can be harnessed in positive ways. This seems like a big deal, and I will come back to it later.
There are, however, negatives to being institutionalized by games. I don't buy most of the arguments about videogame violence leading to real-world violence. In fact, I think that the parents and media who focus on this have the issue all wrong. To be sure, most of the school shootings of our time have been committed by gamers - but for that matter, most instances of sandwich-buying in the past twenty years have been committed by gamers. More important is that video games seem to socialize kids, especially boys, into certain types of reward systems that often have no real applicability to the real world. This can lead not only to disillusionment with school, but also to poor success in many social situations.
To demonstrate this point, I have two somewhat random anecdotes; not entirely convincing, I'll admit...
First, I was at a video-game discussion at THATCamp New England where a participant mentioned an iPhone app called Epic Win. This app is a rather ordinary schedule/to-do list app that adds a reward system familiar to many gamers: after going to the gym, you can give yourself +1 strength, etc. This person said that the reward system (to which he had been institutionalized) made him substantially more likely to do things on his schedule. This seems somewhat benign, but I have known other people who were more driven to do things like exercise, study, and even shower once the rewards for doing so were made explicit in this type of reward structure. Note, for example, the success of the Wii Fit in inspiring weight loss.
The other example is somewhat more sinister. Reading The Game, a book about the world of pick-up artists, struck me in a number of ways. Obviously, there are the reprehensible attitudes about women that pervade the pick-up culture. Also, the rather questionable use of sexual selection theory, especially as popularized by The Red Queen (on which, more in a future post perhaps). Nonetheless, the glimpse into the world of pick-up instructors left me, if anything, feeling sorry for the men most of all. Many of these pick-up artists, and especially the young men who aspire to emulate them, seem to be critically lacking in social skills that would enable them to meet women in more socially acceptable ways. Many of them treat picking up women as, well...a game, often referring to it as such. The positive aspect of learning "the game" seems to be that they acquire more self-confidence. They do this, essentially, by learning how to assign video-game type stats to real world situations, much as in the case of Epic Win, above. In doing so, they are able to apply the task-management skills learned through gaming to the business of making themselves more attractive to women.
The problem is that these skills are still acquired in an artificially imposed context. It strikes me as a case of "cheating" at the game, or "gaming" the system. Gamers can be (and often are) split into two or three categories based on their goals in playing. There are some who like to immerse themselves in the artificial reality described by the game, often called "role players." Others tend to focus on how to "beat" the game, often called "power gamers" or "roll players" (based on their focus on dice in tabletop gaming). Finally, there are social gamers, who basically play as an excuse to hang out with their friends. I think that the second group is the one most likely to include people who have difficulty adjusting their game-based skill-set to other applications. This group includes some generally "positive" behavior, sometimes called "min-maxing" - essentially the process of figuring out how to maximize positive outcomes for a minimum cost, a logical toolset that applies well to things like math, science and economics.
At its extreme, however, this turns into "hacking," "game lawyering" or flat-out cheating - trying to figure out how to exploit holes in the system to "win" in ways not intended by the game. Like Wall Street bankers using their Capitol Hill connections to profit from the latest regulatory shifts, they share long forum posts on how to exploit the latest updates aimed at promoting game balance to do just the opposite. Or they hack into the inner workings to give themselves unlimited gold or super-strength. In the "real world," this equates to anti-social behaviors like insider trading, pettifogging, bribery and graft. It is probably the gamers who tend to these extremes who turn to pick-up artists to learn, not only self-esteem, personal grooming and such, but particular ways of manipulating and deceiving women.
Would this behavior exist without video games? Certainly. Nevertheless, there are several aspects to the internal workings of video game reward systems that seem especially apt to institutionalize gamers to these types of negative attitudes and behaviors. Even when they do not promote extremes of anti-social behavior, most existing games promote certain unrealistic attitudes toward the world:
- The value of essentially everything is knowable and constant.
- Progress is basically linear and generally exponential.
- Outcomes are immediate, visible and significant.
- Gameplay is repeatable, reproducible and transferable.
Labels:
education,
games,
gaming,
history,
institutions,
pick-up artists,
reward systems,
the wire
Wednesday, March 2
What do Blizzard games teach us about political ecology?
In high school and college, I spent a good bit of time playing real time strategy games, especially Starcraft. At some point, I will have to write a post about what these games teach us about technology and social progress, but for now I am interested in the models of ecology and economy that they build. In particular, my recent readings on pollution, weeds and disease have brought the Zerg creep to mind. The creep is supposed to be some sort of organic substance that is necessary to support the Zerg buildings, but is impossible for the other races to build on. There have been some rather interesting meditations on the internet on the scientific reasoning behind the creep, as well as some less interesting uses of it as a metaphor for the invasive nature of progressive political thought. But I think at root, the Zerg creep represents a particular political ecology of the Zerg civilization. In fact, from a game-play perspective the creep seems to have been a rather significant innovation of the Blizzard team, which they have subsequently employed in their other real time strategy games, including Warcraft 3, where the Undead have a virtually identical ecological creep, in this case called the "blight."
In both cases, the understanding is that these civilizations both depend on and promote a particular ecological formation. Note that this logic is not unique to the Zerg/Undead; for example the Protoss are only able to build within a certain radius of their power-generating pylons. In any case, these formulations promote a very visual, somewhat simplified understanding of the miasmatic, wake-type environmental effects of civilizations. For example, in Ecological Imperialism, Alfred Crosby makes the case that the particular ecology that developed around the European farming complex expanded with European settlers. This "creep" included not only the intentional promotion of plants and animals beneficial under the European wheat-and-livestock based political ecology (and economy), but also "side-effect" weeds and nuisance species, like crabgrass and rats. Like the Zerg creep, this had the dual effects of making the landscape more suitable for this political ecology and less suitable for others.
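The creep's dual role - making land buildable for one faction and unbuildable for everyone else, while spreading outward from its sources - can be sketched as a toy cellular automaton. This is a hedged illustration of the mechanic described above, not Blizzard's actual implementation; the grid encoding, spread rule and `buildable` check are all invented for the sketch:

```python
def spread_creep(grid, steps=1):
    """grid: 2D list of 0 (bare ground) and 1 (creep). Returns a new grid
    after `steps` rounds of creep spreading to orthogonal neighbors."""
    rows, cols = len(grid), len(grid[0])
    for _ in range(steps):
        new = [row[:] for row in grid]
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] == 1:
                    # Creep claims each adjacent cell, one ring per step.
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < rows and 0 <= nc < cols:
                            new[nr][nc] = 1
        grid = new
    return grid

def buildable(grid, r, c, faction):
    """The creep faction can only build ON creep; others only OFF it."""
    on_creep = grid[r][c] == 1
    return on_creep if faction == "zerg" else not on_creep
```

Starting a 5x5 grid with a single creep cell at the center and running two steps produces a diamond of creep, inside which only the "zerg" faction can build - the same dual effect Crosby attributes to the European ecological "creep."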
These "creep"-like phenomena can be seen in a lot of historical processes, ranging from the disease front accompanying (and preceding) colonization and warfare detailed by Diamond and McNeill, to the crops and weeds explored by Crosby, to the cycles of pollution and depletion promoted by/promoting artificial fertilizer/pesticide/herbicide use in modern industrial agriculture (not to speak of the debt cycles implicated therein).
So what do these games have to teach about political ecologies? I think they help foreground the inherently spatial/topological nature of these phenomena. In studying political economy, it is easy to be tempted to abstract relational processes to network maps, and differential phenomena to categories. This is patently true of much epidemiology and sociology since the advent of regression analysis and germ theory. Most diseases are analyzed by some combination of their proximate vectors of transmission and social categories of risk. For example, we tend to think of AIDS as transmitted from person to person (primarily by sexual contact), with certain categorical risk factors like race and sexual orientation. We "know" that malaria is transmitted by mosquitoes and cholera by bad water, and that cancer is based on your genes (which you get from your parents) and your behavior.
In fact, the miasmatic, ecological understanding of these diseases is also, in a sense, "correct." More importantly, it is useful. Malaria's proximate vector is the mosquito, but it is spatially associated with swamps; AIDS (in a previous era), with bathhouses; cancer with toxic waste dumps.
Likewise, other social phenomena must be understood to have a miasmatic nature. Agriculture is not just about the chains of production and consumption, it is not just about understandings of nature, it is very much situated in physical space and has a transformative effect on that space. This type of understanding is easily lost in a lot of environmental histories (ironically enough), as they become obsessed with conceptions of nature, or energy flows. Disease, pollution, weeds, "creep" have the advantage of reminding us of the importance of space. This is very much Linda Nash's argument in Inescapable Ecologies, but this understanding is equally visible, if not more so, in the Zerg creep discussed above.
Friday, February 25
Diet, natural selection, the zerg rush and the zombie apocalypse
Unnecessary personal introduction
I have been reading a lot recently on diets related to reconstructions of what prehistoric humans supposedly ate. Under names like The Primal Blueprint or the Paleo Diet, they make the basic argument that the evolution we have undergone since the advent of agriculture is minimal compared to the evolution we underwent prior to that. Meaning, pragmatically, that we are not evolutionarily selected to do things like eat grains or wear shoes. This strain of thought seems to be based in a long stream of Romantic views of nature, but also in some hard science indicating, for example, that hunter-gatherers who don't eat much sugar or grains have better tooth and jaw structure (based in the research of Weston Price); that there is little connection between high dietary cholesterol and heart disease; and so on.
The fact is, I've tried reducing my sugar consumption to near zero, and my grain-based carbohydrate consumption (esp. at lunch) to low levels, and I have found myself rewarded with higher energy levels and getting sick very rarely. My wife suggests that I have some deep discontent with society that leads me to seek outsider views, especially relating to my diet. And this is doubtless true - I was vegetarian on and off for about four or five years, played around with raw food briefly, etc. Nonetheless, I have been more satisfied with the short-term effects of reducing sugar and grain consumption and upping animal fat consumption; short, intense, whole-body workouts; minimal shoes and so on than I ever was eating lots of beans and jogging. The long-term effects are still an open question, as far as I am concerned, although I am coming around to the opinion (belief) that limiting inflammation is much more important than minimizing blood cholesterol.
Personal experience aside, there is something deeply unsatisfying about the argumentation of a lot of the paleo party. On the one hand, they delight in ridiculing the laboratory reductionism of big science, pointing out population studies (like Weston Price's surveys of native populations, the Framingham Study, etc.) that support their positions, and selectively choosing examples of the most successful remaining hunter-gatherers. On the other hand, they delight in citing instances of laboratory science that support their conclusions, or arguing study against study. Most existing criticism of this line of thinking has been even less self-critical, largely relying on what paleos dismiss as a "conventional wisdom" supported by a government-agribusiness-pharmaceutical alliance of interests. The lab study vs. lab study type of argument reminds me of nothing more than that line about statistics; the specifics of study design and similar factors mean that there is a huge tendency toward confirmation bias, even when researchers don't intend it. Population studies are notoriously difficult to unpack, and I can't really tell whether either side is selectively citing examples of hunter-gatherers. I will arbitrarily call these positions a wash. To me, the argument rests largely on the application of theory, in particular evolution.
The evolutionary argument for paleo
The best argument that the paleos trot out, for which I would direct you to Mark Sisson, is that agriculture is:
Implicit in this argument is the assumption that we have moved to a fundamentally different mortality regime. The prehistoric mortality regime was essentially random and selective - death was due primarily to accidents, seasonal starvation and only rarely infectious disease (essentially similar to most wild animal mortality patterns). This produced a population curve that tapered off rather steeply, but evenly across all ages (after especially steep infant mortality). This meant that barring accident or poor hunting performance, fifty- or sixty-year-olds were nearly as healthy and able as twenty- or thirty-year-olds; otherwise they would have been unable to provide for themselves and would have died. It also provided for natural selection to take place on the individual level - poor hunters or gatherers were more likely to die of starvation or accident - and selected for people who did well eating meat and fish, fruits, vegetables and tubers.
The historic, premodern mortality regime was random but not selective, infectious disease based. Its shape was steep among infants and children, shallower during late childhood and early adulthood, and steeper again in middle age and beyond. The poor diet in this period meant a faster aging process, and an illness- and malnutrition-related die-off starting in the late 30s due in large part to things like tooth decay. The randomness was more likely to wipe out entire populations, but beyond infant mortality, it meant that people were more or less equally likely to live to childbearing age. Thus a continuation of poor health among the majority of the population, still selected to eat lots of meat and veg but unable to do so.
The modern mortality regime is both genetic and environmental, but not selective, chronic disease-based. The population curve has a much smaller infant die-off, some limited accident-based mortality, and very few deaths to infectious disease - basically a shallow slope into middle age. At middle-age, the curve steepens due to the beginnings of chronic disease mortality and again in 70s and 80s as chronic illness and age-related illness combine. Those with "bad genes" die of heart attacks in their forties and fifties, and those with "good genes" of stroke, organ failure, and decrepitude-related disease/accident combination in their 80s and 90s. The conventional wisdom is that we now die of cancer and heart disease because we live long enough to. The paleo critique is that we die of cancer and heart disease because we live long enough, and our behavior promotes it; and that both middle-age heart attacks and old age decrepitude are results of our behavior as much as our genes. In either case, we live long enough to pass on our paleo genes - its just that now most people live that long, rather than a large, random subset of them dying due to infectious disease.
My critique
As I see it, the problem with this argument is that it overlooks the effect of group selection on evolution. At the individual level, the changing mortality regime would seem to indicate that we are still hunter-gatherers at heart (and stomach, and DNA). But group-level and population-level effects are enormous! The transition to agriculture made major changes by building big enough population bases for infectious disease. This changed the pattern of our mortality, but it also changed the pattern of our responses. The last post was about some of these - the various things we have called "hygiene." Most of these responses are now operating on the level of "culture" rather than "nature" - memes rather than genes - but they still have genetic effects!
Consider this: John McNeal asserts that 20% of all human years lived in the past 40,000 years were lived in the past 100. This is not the same as saying that 20% of all people who lived in the past 40k lived in the past century, because our lifetimes are longer, so let's conservatively say that this is equivalent to 5-10% of people. This means that the vast majority of population expansion happened in the modern era. If we assume that people in the past had significantly more than two children per woman on average (i.e. above the replacement rate), this means that mortality was very high to prevent wild population expansion. This is not a controversial argument. This means that mortality was the dominant control on population. Non-random mortality is the hand of evolution. We have established mortality, if we establish non-randomness, then we have evolution.
Randomness may be a much more difficult question. We tend to think of disease as relatively random, but it is not the case. Recall the massive demographic effect of the plague on Europe or smallpox on the Americas. These had the effect of greatly reducing certain genetic pools while providing population vacuums for the immune to fill - i.e. for the demographic expansion of the Mongols, or of the Spanish, British, French genetic material. So it seems likely that we are selected on the group level for the types of people who make good conquerors. Who makes good conquerors? Nomads? Farmers? Definitely not hunter-gatherers.
In the early stage of Chinese history, it appears that farmers and hunters were rather balanced - see the long wars between the Shang and Zhou states and the four barbarians. But over time, the farmers won out. This was probably not because the farmers were individually better warriors, evidence seems to indicate quite the contrary, that the hunters were much better fighters, larger in stature and probably healthier. But over time, the weight of population, of energy, of technology came out on the side of the farmers - there were more of them, with better weapons. Not only this, but disease was on the side of the farmers. The farmers definitively won, there are very few hunter-gatherers left in the world, and only in areas where extreme climate or disease regimes keep agriculture at bay.
Later, the main opponent of the farmers was the nomads. It is not clear that this period of competition lasted longer than the farmer-hunter wars, but it is much better documented. Again, this appears to be a case where the nomads were probably more healthy. The nomads also had another energy reserve - livestock - on their side. They also seem to have been less susceptible to epidemic disease genocide. But the lost in the end to a gradual rise in demographic pressure and shifting of the technological balance. The role of non-organic energy may have also been critical - they were generally tamed by gunpowder empires and destroyed by fossil-fueled industrial farming and mining concerns.
There were also the wars between farmers and other farmers, or farmers and hybrid farmer-herders. This is where I would be interested to look for group selection. In the case of farmer vs. hunter, the energy balance was too uneven to promote any clear evolutionary pressure - even unhealthy farmers could defeat hunters. On the other hand when two different pools of farmers competed, the ones better able to use grain-based energy would have been less susceptible to disease, made healthier fighters, and seen fewer children die before they could become soldiers. This makes me wonder whether farming groups that adapted genetically, not just culturally, to farming would have survived better.
Finally, there are several pieces of evidence for evolution in historical time. Lactase persistence (i.e. not developing lactose intolerance as an adult), which would give those with the mutation an advantage over others, has clearly developed since the domestication of livestock among farming communities. Some native Mexican and South Chinese groups exhibit a "thrifty gene" that allows them to maximize nutritional content from grain consumption, but leads to diabetes when high levels of grain are consumed. This would have enabled easier population expansion of this group, at the expense of those who did not extract nutrients as readily.
There is probably also a great deal of genetic selection based on memetic selection. In other words, groups that produced good social ideas were likely to out-compete those that didn't. As these groups were generally based, at least to some degree, in kinship groups. Even within relatively static, peaceful populations, the expanded technological ability, capital resources etc. of successful families would likely promote their genetic material at the expense of the less sucessful. These forms of memetic evolution would tend to promote associated genetic material. While this selection is not directly on nutrition-related attributes, the connection between the political economy and food production is too close to assume that social behavior and genetic adaptations are completely unrelated.
The zerg rush and the zombie apocalypse
At base, I take the argument that individual evolution has not proceeded at the same pace, but I argue that group evolution has still occurred. The paradox of group evolution is that it seems to promote something like the zerg rush, the idea that hordes of lesser individuals can overcome a smaller group of more powerful ones. I feel that this has more potential to explain the entire system that both paleo and conventionals are part of - one in which the power of the large group depends on a system that results in the relative poverty and poor health of its constituent individuals. At the same time, the demographic power seems more like a slow creep than a sprint, something more like the zombie apocalypse. This has the metaphorical advantage of suggesting the role that the very illness of that group plays in its success.
This post has grown too long and incoherent, but I have a final, more hopeful suggestion. It seems that the power of the farm - the poor and dense imperative - was rather well balanced against that of the nomad - more individual power and sparser settlement - until the advent of fossil fuels. The biggest problem with the paleo solution is that it remains largely individual and dependent on volition, in the face of the power of the state and market. I have done little to address the social and moral aspects of paleo in the modern world (a topic for another post), but I would suggest that it is a small-scale solution at best, until paired with a viable political economy. Hunter has little to offer in those terms, but herder might.
I have been reading a lot recently about diets based on reconstructions of what prehistoric humans supposedly ate. Under names like The Primal Blueprint or the Paleo Diet, they make the basic argument that the evolution we have undergone since the advent of agriculture is minimal compared to the evolution we underwent before it - meaning, pragmatically, that we are not evolutionarily selected to do things like eat grains or wear shoes. This strain of thought seems to be rooted in a long stream of Romantic views of nature, but also in some hard science indicating, for example, that hunter-gatherers who don't eat much sugar or grain have better tooth and jaw structure (based on the research of Weston Price), that there is little connection between high dietary cholesterol and heart disease, and so on.
The fact is, I've tried reducing my sugar consumption to near zero and my grain-based carbohydrate consumption (especially at lunch) to low levels, and I have found myself rewarded with higher energy levels and very few illnesses. My wife suggests that I have some deep discontent with society that leads me to seek out outsider views, especially relating to my diet. This is doubtless true - I was vegetarian on and off for four or five years, played around with raw food briefly, and so on. Nonetheless, I have been more satisfied with the short-term effects of reducing sugar and grain consumption and upping animal fat consumption, short and intense whole-body workouts, minimal shoes and so on than I ever was eating lots of beans and jogging. The long-term effects are still an open question as far as I am concerned, although I am coming around to the opinion (belief) that limiting inflammation is much more important than minimizing blood cholesterol.
Personal experience aside, there is something deeply unsatisfying about the argumentation of much of the paleo party. On the one hand, they delight in ridiculing the laboratory reductionism of big science, pointing to population studies (like Weston Price's surveys of native populations, the Framingham Study, etc.) that support their positions, and selectively choosing examples of the most successful remaining hunter-gatherers. On the other hand, they delight in citing instances of laboratory science that support their conclusions, or in arguing study against study. Most existing criticism of this line of thinking has been even less self-critical, largely relying on what paleos dismiss as a "conventional wisdom" supported by an interested government-agribusiness-pharmaceutical alliance. The study-versus-study form of argument reminds me of nothing so much as that old line about statistics; the specifics of study design and similar factors create a huge tendency toward confirmation bias, even when researchers don't intend it. Population studies are notoriously difficult to unpack, and I can't really tell whether either side is selectively citing examples of hunter-gatherers. I will arbitrarily call these positions a wash. To me, the argument rests largely on the application of theory, in particular evolution.
The evolutionary argument for paleo
The best argument that the paleos trot out, for which I would direct you to Mark Sisson, is that agriculture is:
- Too recent a development to have had significant effects on evolution.
- A system that makes it too easy for the weak to survive to reproductive age - by producing an easy surplus - thereby allowing all genes to be perpetuated.
Implicit in this argument is the assumption that we have moved to a fundamentally different mortality regime. The prehistoric mortality regime was essentially random and selective - death was due primarily to accidents and seasonal starvation, and only rarely to infectious disease (a pattern essentially similar to most wild animal mortality). This produced a population curve that tapered off rather steeply, but evenly across all ages (after especially steep infant mortality). Barring accident or poor hunting performance, fifty- or sixty-year-olds were nearly as healthy and able as twenty- or thirty-year-olds; otherwise they would have been unable to provide for themselves and would have died. It also allowed natural selection to operate at the individual level - poor hunters or gatherers were more likely to die of starvation or accident - and selected for people who do well eating meat and fish, fruits, vegetables and tubers.
The historic, premodern mortality regime was random but not selective, and based on infectious disease. Its shape was steep among infants and children, shallower during late childhood and early adulthood, and steeper again in middle age and beyond. The poor diet of this period meant a faster aging process, and an illness- and malnutrition-related die-off starting in the late 30s, due in large part to things like tooth decay. The randomness was more likely to wipe out entire populations, but beyond infant mortality it meant that people were more or less equally likely to live to childbearing age. The result was a continuation of poor health among the majority of the population, still selected to eat lots of meat and vegetables but unable to do so.
The modern mortality regime is both genetic and environmental, but not selective, and based on chronic disease. The population curve has a much smaller infant die-off, some limited accident-based mortality, and very few deaths from infectious disease - basically a shallow slope into middle age. At middle age the curve steepens with the onset of chronic disease mortality, and again in the 70s and 80s as chronic illness and age-related illness combine. Those with "bad genes" die of heart attacks in their forties and fifties, and those with "good genes" die of stroke, organ failure, and the decrepitude-related combination of disease and accident in their 80s and 90s. The conventional wisdom is that we now die of cancer and heart disease because we live long enough to. The paleo critique is that we die of cancer and heart disease because we live long enough and our behavior promotes it - and that both middle-age heart attacks and old-age decrepitude are results of our behavior as much as our genes. In either case, we live long enough to pass on our paleo genes; it's just that now most people live that long, rather than a large, random subset dying of infectious disease.
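The three regime shapes described above can be sketched numerically. This is a toy illustration only - the per-year death probabilities below are invented to reproduce the described curve shapes, not drawn from any data:

```python
def survivorship(hazards):
    """Cumulative survival from a list of per-year death probabilities."""
    alive, curve = 1.0, []
    for h in hazards:
        alive *= (1 - h)
        curve.append(alive)
    return curve

def hazard(age, regime):
    # Illustrative hazards only; shapes follow the post's description.
    if regime == "prehistoric":   # steep infant mortality, then flat and even
        return 0.25 if age < 5 else 0.02
    if regime == "premodern":     # infant die-off, shallow, steeper after ~40
        return 0.20 if age < 5 else (0.015 if age < 40 else 0.06)
    if regime == "modern":        # shallow slope, steepening only in old age
        return 0.005 if age < 50 else (0.02 if age < 70 else 0.10)

curves = {r: survivorship([hazard(a, r) for a in range(90)])
          for r in ("prehistoric", "premodern", "modern")}

# Share of a birth cohort surviving through age 59 under each regime:
for regime, curve in curves.items():
    print(f"{regime:12s} survival to 60: {curve[59]:.1%}")
```

With these made-up numbers the ordering comes out as the post describes: the modern regime delivers most people into late middle age, while the premodern regime is (slightly) worse than the prehistoric one past 40.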
My critique
As I see it, the problem with this argument is that it overlooks the effect of group selection on evolution. At the individual level, the changing mortality regime would seem to indicate that we are still hunter-gatherers at heart (and stomach, and DNA). But group-level and population-level effects are enormous! The transition to agriculture made major changes by building population bases large enough to sustain infectious disease. This changed the pattern of our mortality, but it also changed the pattern of our responses. The last post was about some of these - the various things we have called "hygiene." Most of these responses now operate at the level of "culture" rather than "nature" - memes rather than genes - but they still have genetic effects!
Consider this: John McNeill asserts that 20% of all human years lived in the past 40,000 years were lived in the past 100. This is not the same as saying that 20% of all people who lived in the past 40,000 years lived in the past century - our lifetimes are longer - so let's conservatively say that this is equivalent to 5-10% of people. It still means that the vast majority of population expansion happened in the modern era. If we assume that people in the past averaged significantly more than two children per woman (i.e. above the replacement rate) - and this is not a controversial assumption - then mortality must have been very high to prevent runaway population growth. Mortality, in other words, was the dominant control on population. Non-random mortality is the hand of evolution. We have established mortality; if we can establish non-randomness, then we have evolution.
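The back-of-envelope demography here can be made explicit. A minimal sketch, using round figures that are my own assumptions rather than anything from the post (say five million humans ten thousand years ago, one billion by 1800, and six births per woman):

```python
import math

def implied_growth_rate(n_start, n_end, years):
    """Continuous annual growth rate implied by two endpoint populations."""
    return math.log(n_end / n_start) / years

# Rough, illustrative figures (assumptions, not the post's numbers):
# ~5 million humans 10,000 years ago, ~1 billion by ~1800 CE.
r = implied_growth_rate(5e6, 1e9, 10_000)

# At that growth rate, each woman needs only slightly more than two
# surviving children; with ~6 births per woman, most births must have
# failed to reproduce - that gap is the mortality the post points to.
births_per_woman = 6
surviving_needed = 2 * math.exp(r * 25)  # assuming a ~25-year generation
fraction_lost = 1 - surviving_needed / births_per_woman

print(f"implied growth rate: {r * 100:.3f}% per year")
print(f"fraction of births not reproducing: {fraction_lost:.0%}")
```

Even doubling or halving the assumed endpoint populations barely moves the conclusion: pre-modern growth rates were a small fraction of a percent per year, so mortality had to consume most of the fertility surplus.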
Randomness may be a much more difficult question. We tend to think of disease as relatively random, but this is not the case. Recall the massive demographic effect of the plague on Europe, or of smallpox on the Americas. These had the effect of greatly reducing certain gene pools while providing population vacuums for the immune to fill - i.e. for the demographic expansion of the Mongols, or of Spanish, British and French genetic material. So it seems likely that we are selected at the group level for the types of people who make good conquerors. Who makes good conquerors? Nomads? Farmers? Definitely not hunter-gatherers.
In the early stages of Chinese history, farmers and hunters appear to have been rather evenly balanced - see the long wars between the Shang and Zhou states and the four barbarians. But over time, the farmers won out. This was probably not because the farmers were individually better warriors; the evidence seems to indicate quite the contrary, that the hunters were much better fighters, larger in stature and probably healthier. Over time, though, the weight of population, of energy, of technology came out on the side of the farmers - there were more of them, with better weapons. Not only that, but disease was on the side of the farmers. The farmers won definitively: there are very few hunter-gatherers left in the world, and only in areas where extreme climate or disease regimes keep agriculture at bay.
Later, the main opponents of the farmers were the nomads. It is not clear that this period of competition lasted longer than the farmer-hunter wars, but it is much better documented. Again, this appears to be a case where the nomads were probably healthier. The nomads also had another energy reserve - livestock - on their side, and they seem to have been less susceptible to epidemic disease genocide. But they lost in the end to a gradual rise in demographic pressure and a shifting technological balance. The role of non-organic energy may also have been critical - the nomads were generally tamed by gunpowder empires and destroyed by fossil-fueled industrial farming and mining concerns.
There were also the wars between farmers and other farmers, or farmers and hybrid farmer-herders. This is where I would be interested to look for group selection. In the case of farmer vs. hunter, the energy balance was too uneven to promote any clear evolutionary pressure - even unhealthy farmers could defeat hunters. On the other hand, when two different pools of farmers competed, the ones better able to use grain-based energy would have been less susceptible to disease, made healthier fighters, and seen fewer children die before they could become soldiers. This makes me wonder whether farming groups that adapted genetically, not just culturally, to farming would have survived better.
Finally, there are several pieces of evidence for evolution in historical time. Lactase persistence (i.e. not developing lactose intolerance as an adult), which would give those with the mutation an advantage over others, has clearly developed among farming communities since the domestication of livestock. Some native Mexican and south Chinese groups exhibit a "thrifty gene" that allows them to maximize the nutrition extracted from grain, but leads to diabetes when grain is consumed at high levels. This would have enabled easier population expansion for these groups, at the expense of those who did not extract nutrients as readily.
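The pace of this kind of selection is easy to check with a textbook haploid selection model. The starting frequency and selection coefficient below are illustrative assumptions, not measured values, but they show that even a modest fitness advantage carries a rare allele to near fixation within the few hundred generations since livestock domestication:

```python
def select(p0, s, generations):
    """Haploid selection: allele with relative fitness 1+s vs. 1.

    Each generation, the odds of carrying the allele are multiplied
    by (1+s), so a small advantage compounds rapidly.
    """
    p = p0
    for _ in range(generations):
        p = p * (1 + s) / (1 + p * s)
    return p

# Assumed, illustrative parameters: 1% starting frequency, a 3% fitness
# advantage, and ~300 generations (~7,500 years at 25 years/generation).
p = select(0.01, 0.03, 300)
print(f"allele frequency after 300 generations: {p:.1%}")
```

Under these assumptions the allele goes from 1% to the high 90s within the window since domestication - historical time is plenty for selection of this strength.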
There is probably also a great deal of genetic selection based on memetic selection. In other words, groups that produced good social ideas were likely to out-compete those that didn't, and these groups were generally based, at least to some degree, on kinship. Even within relatively static, peaceful populations, the expanded technological ability, capital resources and so on of successful families would likely promote their genetic material at the expense of the less successful. These forms of memetic evolution would tend to promote the associated genetic material. While this selection does not act directly on nutrition-related attributes, the connection between political economy and food production is too close to assume that social behavior and genetic adaptations are completely unrelated.
The zerg rush and the zombie apocalypse
At base, I accept the argument that individual evolution has not proceeded at the same pace, but I argue that group evolution has still occurred. The paradox of group evolution is that it seems to promote something like the zerg rush - the idea that hordes of lesser individuals can overcome a smaller group of more powerful ones. I feel this has more potential to explain the entire system that both paleos and conventionals are part of - one in which the power of the large group depends on a system that results in the relative poverty and poor health of its constituent individuals. At the same time, the demographic power seems more like a slow creep than a sprint - something more like the zombie apocalypse. That metaphor has the advantage of suggesting the role that the very illness of the group plays in its success.
This post has grown too long and incoherent, but I have a final, more hopeful suggestion. It seems that the power of the farm - the poor and dense imperative - was rather well balanced against that of the nomad - more individual power and sparser settlement - until the advent of fossil fuels. The biggest problem with the paleo solution is that it remains largely individual and dependent on volition, in the face of the power of the state and market. I have done little to address the social and moral aspects of paleo in the modern world (a topic for another post), but I would suggest that it is a small-scale solution at best, until paired with a viable political economy. Hunter has little to offer in those terms, but herder might.
Thursday, February 24
Energy, Technology, Disease and Hygiene
Forget for a moment all of the specifics of form and think instead about the basis of power. The physics of it, even.
P = ΔW/Δt
In social terms, power is the ability to get work done in a certain amount of time. Work, of course, is simply application of energy.
W = ΔE
So power, including social power, is the ability to bring a certain amount of stored energy to bear on a problem within a given time. This means that social actors looking to exert power want to be able to store energy, and then apply it.
In the modern world, this is made incredibly complex by things like fossil fuels, which are huge storehouses of energy that can be used very quickly, and electricity, which is a relatively efficient means of transferring energy from one place to another quickly.
So let's step back to the premodern context. Energy still came in all the forms that it comes in now: mechanical energy, chemical energy, light, heat... The major difference was that the ability to collect, store, convert and apply energy was much more limited.
The ultimate source of energy is sunlight, which plants convert to stored chemical energy. This energy can then be stored or converted into mechanical energy by animals, or into heat and light by burning. In a more indirect and less important sense, it can be converted into a peculiar form of potential energy in the form of structures.
But basically, we are concerned with mechanical energy, because this has been the primary form in which social actors wanted to exert power. In some sense, human history is the story of collecting and applying mechanical energy. In the premodern context, the power of an individual was largely determined by his or her own ability to convert energy. The power of an institution depended largely on how many people and animals it could control, and thereby make use of their energy. A family generally wanted as many children as it could feed so that it could use their labor. A state likewise wanted as many subjects as it could control, so that it could make use of their labor.
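To put rough numbers on why headcount was power, here is a minimal sketch. All figures are round assumptions of mine (a sustained useful output of ~75 watts per laborer, an 8-hour working day, gasoline at ~34 MJ per liter), chosen only for order-of-magnitude illustration:

```python
# Illustrative round numbers (assumptions): a laborer sustains ~75 W of
# useful mechanical output over an ~8-hour working day.
WATTS_PER_LABORER = 75
HOURS_PER_DAY = 8
JOULES_PER_LITER_GASOLINE = 34e6  # rough energy density of gasoline

joules_per_laborer_day = WATTS_PER_LABORER * HOURS_PER_DAY * 3600  # ~2.2 MJ

# A state able to mobilize 10,000 corvee laborers commands, per day:
laborers = 10_000
daily_work = laborers * joules_per_laborer_day

print(f"one laborer-day: {joules_per_laborer_day / 1e6:.1f} MJ")
print(f"10,000 laborer-days: {daily_work / 1e9:.1f} GJ")
print(f"gasoline equivalent: {daily_work / JOULES_PER_LITER_GASOLINE:.0f} liters")
```

On these assumptions, a whole day's work from ten thousand laborers fits in a few hundred liters of gasoline - which is exactly why, before fossil fuels, accumulating people was the only way to accumulate power.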
From the case of a single individual gathering plants to the highly complex premodern empires with systems of taxation and warfare, the system at its root was about getting the most humans (and labor animals) to convert the most plants into energy. This energy could be applied to public works, to creating art, to warfare... whatever. Surplus labor for these things was a matter of surplus mechanical energy, primarily in human form. The development of complex states out of small kin-groups was essentially a matter of developing better technologies to create and organize this surplus. We can make a simplifying categorization into two forms of technology:
- Production technology, generally physical technology.
- Control technology, generally social technology.
So, a quick set of examples. An early state developed around irrigation would produce higher crop yields, and in turn a larger population that could be harnessed for its projects. These crop yields could also be stored, to a limited extent, to feed future populations and make use of their labor in the future. Indeed, this was essentially the only reason to store grain! On the control side, means of organizing and directing this expanded population - to get them to build the pyramids or wage war, for example - were also developed.
This hints at the major complication: the development of one type of technology tended to spur development of the other. An increased population could not easily be controlled by the same institutions used to manage smaller ones. Likewise, bigger, well-organized populations could be funneled into developing more efficient means of gathering chemical energy in plant form. This is, in some ways, a more general form of the Wittfogel thesis that despotic states emerged to control and organize the surplus of irrigation projects.
But it's more complex than that. As the elder McNeill pointed out, the surplus of stored energy represented in a growing demographic base was subject to the predations of both micro- and macro-parasites, i.e. diseases and states. The concentration of stored energy created a breeding ground both for new social forms of extraction and for the maintenance of endemic disease. So in extracting and applying more of its population's energy, a state had to compete with plagues.
Both micro- and macro-parasites are further subject to the rules of network effects and the power law. That is to say, as ever-larger systems became integrated, they became more sensitive to propagating failures. Larger empires are more subject to collapse from even localized rebellion and disorder; larger markets are more subject to collapse from even localized recession; larger demographic pools are more subject to disease epidemics.
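The point about integration and propagating failure can be illustrated with a deliberately crude toy model (the probabilities below are arbitrary assumptions): if localized failures occur independently and each has some fixed chance of propagating system-wide, then simply integrating more localities into one system drives the chance of collapse toward certainty:

```python
def p_system_failure(n_localities, p_local, p_propagate):
    """Chance that at least one local failure occurs AND propagates.

    Toy model: localities fail independently in a given period; each
    failure propagates system-wide with a fixed probability. Integrating
    more localities into one system multiplies the exposure.
    """
    p_trigger = p_local * p_propagate
    return 1 - (1 - p_trigger) ** n_localities

# Arbitrary illustrative parameters: 1% local failure rate, 50% chance
# a local failure cascades through the integrated network.
for n in (10, 100, 1000):
    print(f"{n:5d} localities: {p_system_failure(n, 0.01, 0.5):.1%}")
```

Nothing about any single locality changes as the system grows; collapse becomes near-certain purely because integration exposes the whole to every part - the empire, market, and epidemic cases above all share this shape.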
Thus, through much of history, as new production technologies expanded the energy (aka population) base, the chance of system failure increased. This would typically result in a cycle of dynastic rise, period of division, and the subsequent rise of a new empire, generally built around a new control technology.
It is especially worth noting that these control technologies existed at all scales, and came to include many things that can now be grouped under the overdetermined term "hygiene" - everything from ethnicity to manners to epidemiology. I would argue that the changing nature of this term reflected the changing nature of the disease threat. Avoidance of certain foods has always been a good way of preventing diseases from crossing over to humans from their animal sources. Racial and ethnic taboos likely had their origin in keeping disease pools separate, perhaps preventing cascading system failure by keeping sub-networks partially separated. As this became increasingly difficult, manners perpetuated forms of social separation, and personal hygiene came to encompass things like regular bathing. At the dawn of modernity, the integration of disparate supply networks in goods (as well as people) created a greater need for public health professionals and the modern concept of hygiene.
But control technologies existed in other forms as well. In the economic realm, coinage, finance and the like provided more efficient extraction and storage, and some protection against system failure. But these technologies only made sense in an already-connected world.
This is only a preliminary sketch attempting to integrate a variety of very different theories.
Reading list:
McNeill, Plagues and Peoples
McNeill, Something New Under the Sun
Davis, Late Victorian Holocausts
Wittfogel, Oriental Despotism
Rogaski, Hygienic Modernity
Evans, Death in Hamburg
Mennell, Norbert Elias: An Introduction
Johnson, The Ghost Map
Diamond, Guns, Germs and Steel
P = ΔW/Δt
In social terms, power is the ability to get work done in a certain amount of time. Work, of course, is simply application of energy.
W = ΔE
So power, including social power, is the ability to bring a certain amount of stored energy to bear on a problem within a given time. This means that social actors looking to exert power want to be able to store energy, and then apply it.
In the modern world, this is made incredibly complex by things like fossil fuels, which are huge storehouses of energy that can by used very quickly, and electricity, which is a relatively efficient means of transferring energy from one place to another quickly.
So let's step back to the premodern context. Energy still came in all the forms that it comes in now: mechanical energy, chemical energy, light, heat... The major difference was that the ability to collect, store, convert and apply energy were much more limited.
The ultimate source of energy is light, which plants convert to stored chemical energy. This energy can then be stored or converted into mechanical energy by animals, or to heat and light by burning. In a more indirect and less important sense, it can be converted into a peculiar form of potential energy in the form of structures.
But basically, we are concerned with mechanical energy, because this has been the primary form in which social actors wanted to exert power. In some sense, human history is the story of collecting and applying mechanical energy. In the premodern context, the power of an individual was largely determined by his or her own ability to convert energy. The power of an institution depended largely on how many people and animals it could control, and thereby make use of their energy. A family generally wanted as many children as it could feed so that it could use their labor. A state likewise wanted as many subjects as it could control, so that it could make use of their labor.
From the case of a single individual gathering plants, to the highly complex premodern empires with systems of taxation and warfare, the system at its root was about getting the most humans (and labor animals) to convert the most plants into energy. This energy could be applied in public works, in creating art, in warfare...whatever. Surplus labor for these things was a matter of surplus mechanical energy, primarily in human form. The development of complex states out of small kin-groups was essentially a matter of developing better technologies to create and organize this surplus. We can make the simplifying categorization of this into two forms of technology:
- Production technology, generally physical technology.
- Control technology, generally social technology.
So a quick set of examples. An early state developed around irrigation would produce higher crop yields, and in turn a larger population that could be harnessed for its projects. These crop yields could also be stored, to a limited extent, for feeding future populations and making use of their labor in the future. Indeed, this was precisely the reason to store grain: stored grain was stored energy. On the control side, means of organizing and directing this expanded population, to get it to build pyramids or wage war for example, were also developed.
This hints at the major complication, which is that development of one type of technology tended to spur development of the other type. An increased population could not easily be controlled by the same institutions used to manage smaller ones. Likewise, bigger, well-organized populations could be funneled into developing more efficient means of gathering chemical energy in plant form. This is, in some ways, a more general form of the Wittfogel thesis that despotic states emerged to control and organize the surplus of irrigation projects.
But it's more complex than that. As the elder McNeill pointed out, the surplus of stored energy represented in a growing demographic base was subject to the predations of both micro- and macro-parasites, i.e., diseases and states. The concentration of stored energy created a breeding ground both for new social forms of extraction and for the maintenance of endemic disease. So in extracting and applying more of its population's energy, a state had to compete with plagues.
Both micro- and macro-parasites are further subject to the rule of network effects and the power law. That is to say, as ever-larger systems become integrated, they grow more sensitive to propagating failures. Larger empires are more subject to collapse from even localized rebellion and disorder; larger markets are more subject to collapse from even localized recession; larger demographic pools are more subject to disease epidemics.
Thus, through much of history, as new production technologies expanded the energy (i.e., population) base, the chance of system failure increased. This would typically result in a cycle of dynastic rise and fall, a period of division, and the subsequent rise of a new empire, generally built around a new control technology.
It is especially worth noting that these control technologies existed at all scales, and came to include many things that can now be grouped under the overdetermined term "hygiene" - everything from ethnicity to manners to epidemiology. I would argue that the changing nature of this term reflected the changing nature of the disease threat. Avoidance of certain foods has always been a good way of preventing diseases from crossing over to humans from their animal sources. Race and ethnic taboos likely had their origin in keeping disease pools separate, perhaps preventing cascading system failure by keeping sub-networks partially isolated. As this became increasingly difficult, manners perpetuated forms of social separation, and personal hygiene came to encompass things like regular bathing. At the dawn of modernity, the integration of disparate supply networks in goods (as well as people) created a greater need for public health professionals and the modern concept of hygiene.
But control technologies existed in other forms as well. In the economic realm, coinage, finance, and the like provided more efficient extraction and storage, and some protection against system failure. But these technologies only made sense in an already-connected world.
This is only a preliminary sketch attempting to integrate a variety of very different theories.
Reading list:
McNeill, Plagues and Peoples
McNeill, Something New Under the Sun
Davis, Late Victorian Holocausts
Wittfogel, Oriental Despotism
Rogaski, Hygienic Modernity
Evans, Death in Hamburg
Mennell, Norbert Elias: An Introduction
Johnson, The Ghost Map
Diamond, Guns, Germs and Steel