在二十世纪五、六十年代,卡内基理工新星云集,辉煌璀璨。以Herb Simon,Dick Cyert,Jim March为首的学者创立了所谓的卡内基学派,用严谨的数理模型和新兴的计算机模拟等方法深入探究行为、组织与决策等课题。
虽然是心理学家、组织学者和计算机专家,Simon却在1978年以Bounded Rationality(有限理性)等贡献获得诺贝尔经济学奖。此前,他已于1975年获得图灵(Turing)奖。当时先后在卡内基任教的Franco Modigliani,Merton Miller,Robert Lucas等金融学家和经济学家亦是锋芒初露,三人后来均获诺奖。
1960年代中期的卡内基博士生中,Olly Williamson,Ed Prescott和Dale Mortensen之后亦获经济学诺奖。
如果管理学也有诺奖,当年卡内基的学生中至少有两个可以作为候选人。一个是1964年获得博士学位的Bill Starbuck。一个是1968年用四年时间完成本硕连读的Jeff Pfeffer。二人皆可称为管理学界的全才。OB/OT/HR/Strategy通吃。
更为全面,也更为“庞杂”的,当属Bill Starbuck。他涉猎的范围更为广袤,方法论上也比Pfeffer更为广博、精准。
昨天说了费老师,今天聊聊星巴克。
Bill Starbuck: A life-long commitment to management research and learning. A renaissance man. A genuine scholar. An honest soul.
印第安纳的星巴克根正苗红
大概在二、三十年前,就读过Bill Starbuck的自传。他冷静地剖析自己的人生轨迹、职业生涯与心路历程,可以说是令人震撼地真诚,brutally honest。与其卡内基恩师Herb Simon的自传一样真诚而冷静。
Laying bare his very soul!那么早就志得意满的他,行文之间难免自傲,Cocky!但每次对危机和挫折的描述,他都归因于自己的无知和决策失误。非常坦诚。
"WATCH WHERE YOU STEP!" OR INDIANA STARBUCK AMID THE PERILS OF ACADEME (RATED PG), by William H. Starbuck. Published in A. Bedeian (ed.), Management Laureates, Volume 3; JAI Press, 1993, pages 63-110.
1934年出生的Starbuck,今年已然90高龄。1956年哈佛本科学数学。1964年卡内基理工博士毕业。曾任职普渡大学,约翰霍普金斯大学,并于1967年32岁时出任康奈尔大学正教授,曾担纲ASQ主编(1968-1971)。
SMU曾在1971年以全美管理学教授最高的工资聘请他,但由于各种原因未能成行。游访欧洲四年后回到美国,先后任教于Wisconsin Milwaukee(1974-1984)和NYU Stern(1985-2005)。退休后,现在U of Oregon的Lundquist商学院任驻院访问教授(Professor in Residence)。
美国心理学会院士(APA,1975)。美国管理学会院士(AOM,1986)。曾于1997年出任AOM主席。当时我在现场聆听过他的Presidential Address。
Bill的一个重要特点是只谈观点和想法,不在乎发表的地方。他的很多有见地的东西估计顶尖期刊也发不了。
星巴克老师根正苗红:Simon、March、Cyert的学生,1967年就是正教授。但他根本不在乎这些。
60余年学术生涯著述颇丰
他的第一篇论文,是上学期间与Cyert和March合作发表在Management Science上的(1961)。他的最近一篇论文则是2024年发表在Scandinavian Journal of Management上的。
下面是一些我个人认为比较重要的工作:
组织的环境、危机与折腾
Bill在1970年代专注于对组织环境的研讨(Starbuck, 1976),对组织环境类型的分类与描述做出了重要贡献。
另外一项重要贡献涉及组织对危机的反应(Starbuck,Greve & Hedberg,1978)。
借用中文“危机”一词对“危”与“机”的同时包容,针对欧洲三家公司应对危机的反应,他们认为:最高决策者如果能在危机中看到机会,并鼓足勇气、满怀热情地去追寻机会,组织将会因此受益。
Facit, Ferry, and Kalmar Verkstad are three organizations that have rediscovered the truth of an ancient, Chinese insight. The Chinese character for crisis combines two simpler symbols, the symbol for danger and the one for opportunity. Crises are times of danger, but they are also times of opportunity.
Organizations can benefit from crises if they can perceive their opportunities and can marshal the courage and enthusiasm to pursue them. Whether organizations do this is largely up to their top managers. With little more than words, the top managers can shape ideological settings that reveal opportunities, nurture courage, and arouse enthusiasm. As Edmund Leach rightly observed: “The world is a representation of our language categories, not vice versa.”
在ASR(1983)上,Bill发表了一篇极富见地而且深具卡内基学派风范的妙文:组织作为行动制造者(Organizations as Action Generators)。颇有Cohen,March,Olsen(1972)Garbage Can Model的神韵。组织有两大功能,一是解决问题(Problem Solving),一是制造和产生行动(Action Generating)。大部分时间组织在制造行动,通常是自动自发地折腾,结果通常不乐观。大部分组织撑不了几年就自废武功,往往葬身于对自身行动的不断辩护,以及只有自己才能理解和接受的无端期许之中。
Most of the time, organizations generate actions unreflectively and nonadaptively. To justify their actions, organizations create problems, successes, threats and opportunities. These are ideological molecules that mix values, goals, expectations, perceptions, theories, plans, and symbols. The molecules form while people are result watching, guided by the beliefs that they should judge results good or bad, look for the causes of results, and propose needs for action. Because Organizations modify their behavior programs mainly in small increments that make sense to top managers, they change too little and inappropriately, and nearly all organizations disappear within a few years.
组织学习之研究贯穿学术生涯
除了对环境的关注,Bill对组织设计也倾注了大量时间和精力。他在1981年合作主编的Handbook of Organizational Design,被认为是继其恩师Jim March主编的Handbook of Organizations(1965)之后又一部集大成的重要OT文献汇编。
在这部文集中,收录了一篇Bo Hedberg关于组织“去学习”、忘却某些知识与记忆的文章。两位主编Paul Nystrom和Bill Starbuck(1984)也就同一话题撰文论述。三位学者的早期贡献,引发了一系列后续研究,探究组织如何卸载知识、消除记忆。
Learning, Unlearning, Relearning
Forgetting, Hibernating, Restoring (Reawakening)
It is always fascinating to read stuff written by Bill Starbuck. One interesting topic he has championed, along with colleagues Paul Nystrom and Bo Hedberg, is the so-called organizational unlearning.
While tons of works have been created that touch on the phenomenon of organizational learning, we are not sure whether what organizations have learned, or organizational knowledge to be exact, is indeed authentic or resembling the truth.
What if organizational learning (individual learning included), for the sake of argument, produces fake knowledge, mistaken causal relationships, and wrong conclusions?
We are not sure whether organizations improve their performance because of what they have learned and how they consequently act based on such learning.
Or do organizations simply like to believe that it is precisely their certain type of organizational learning (other than factors such as serendipity) that actually improves their performance?
How does an organization confirm or refute the learning-performance relationship through a higher order of learning?
How do they learn the truth of the relationship between their knowledge and performance, between their learning and performance, and between the exercise of their learning (through a certain type of action) and performance?
If they detect the wrongfulness, obsoleteness, or irrelevance of their current knowledge base, action routine, and capability repertoire, how do they engage in the "unlearning" or "forgetting" process?
Does unlearning simply mean the elimination or deletion of a certain type of knowledge, putting such knowledge in dormant forms, or replacing it with new knowledge?
Unlearning seems especially warranted in situations where the concerned knowledge is obsolete.
Yet, in a natural or neutral sense, unlearning could also happen in situations where concerned knowledge is truthful and useful.
Whether obsolete or useful, knowledge could be "forgotten" because of various internal and external shocks, e.g., M&A, market entry or exit, change of CEOs, upheavals during and after power struggles, and retirement of key personnel.
Unlearning or forgetting at the organizational level could also happen through the shrinkage or hiding of knowledge.
The repositories of knowledge, key personnel in particular, could simply choose to hide or withdraw the service of their beholden knowledge, causing temporary suspension or hibernation of organizational memories.
A resilient organization could somehow restore its suspended knowledge through reawakening. Or it can reacquire such knowledge if it remembers how to relearn it.
可以说,Bill对组织学习的兴趣贯穿其职业生涯始终。他单篇引用率最高的文章恰恰也是关于知识密集型组织的组织学习 (Starbuck,1992)。
Knowledge Intensive Firms (KIFs):
Knowledge is a stock of expertise, not a flow of information. Ironically, firms’ stocks of expertise come from the flows in complex input-output systems. Knowledge flows in through hiring, training and purchases of capital goods. Some knowledge gets manufactured internally, through research, invention and culture building. Knowledge flows out through personnel departures, imitated routines and sales of capital goods. Some knowledge becomes obsolete. Fluid knowledge solidifies when converted into capital goods or routines. The sequences of events resemble random walks, and the net outcomes are difficult to foresee.
Summary: Because everyone defines knowledge differently, discussions of KIFs evoke debates about proper definition. Such debates have led me
(a) to emphasize esoteric expertise instead of widely shared knowledge;
(b) to distinguish an expert from a professional and a knowledge-intensive firm from a professional firm;
(c) to differentiate a knowledge-intensive firm from an information-intensive firm, and
(d) to see knowledge as a property of physical capital, social capital, routines, and organizational cultures, as well as individual people.
Bill的文章中,我经常在课堂上引用的,是他与Frances Milliken合作的Challenger: Fine-tuning the odds until something breaks (JOMS, 1988),解读挑战者号航天飞机灾难。Simply brilliant。这篇文章对比了工程师与管理者两大阵营及其背后代表的文化对决策程序和内容的不同解读,以及对每一次决策(是否发射航天飞机)与前期成功(或失败)之间关系的不同理论假说。一个组织不断循规蹈矩地进行微调,直到发生一次大的灾难,可能是致命性的灾难。即使如此,组织也不一定就此进行彻头彻尾、洗心革面的变化。
在后续的文章中(Baumard & Starbuck,2005),上述结论得到证实和强化:
Learning from repeated success makes future failure very likely.
As it is often unclear whether a sequence of events adds up to success or failure, organization members slant interpretations to their own benefit.
Managers tended to dismiss small failures that challenged [the firm’s] foundation premises.
Large failures supported even less learning than did small ones.
The higher the expectations for a project, the more reluctant were managers to question its ideological foundations.
Managers interpreted most small failures as demonstrating the foolishness of deviating from the company’s core beliefs.
Organizational learning, which appears so benign and desirable, can be dangerous or ineffective.
组织学习?组织不学习!尤其是不从失败中学习。
2005年,Starbuck在Org Science撰文,剖析不同级别期刊的质量。顶刊不一定所有文章都好,其他刊物也有好文章。顶刊上发表的平庸文章,使这些期刊获得了超出其实际质量的、不应有的影响力。很多有价值的文章在发表前曾在多处数次被拒。
直到2023年,Bill还在参与撰文,呼吁重视replication study在博士生教育中的重要性。
真实世界到底是个啥?
2004年,Bill在Organization Studies上发表了一篇文章,“为什么我停止了去尝试理解所谓的真实世界”(Why I stopped trying to understand the real world)。像对待他的自传一样,我也是隔三差五地读一遍。下面是某些片段,以及我自己的理解和翻译。
摘要:
很久以前,我曾相信理性可以造就理解。我生活在真实的物理与社会环境中,我想理解社会现实。我想创立一个以数学模型、计算机模拟和系统的实验为基础的真正的行为科学。这些年来,诸多的经历挑战这些曾经相信的东西。我发现,理性不仅是一个极具欺骗性的工具,而且具有潜在风险。我发现研究结果具有极低的可靠性,有些学科数十年没有任何明显的进步,社会文化强烈地影响研究者对“什么构成有用的知识”的判断。我看到很多所谓的研究不过是用夸夸其谈的语言所包装的随机噪音。我所研究的社会系统根本不是什么现实,而是由观察者或者社会习俗所产生的任意性分类。我成了一个倡导者,倡导这样的研究:去改变某种状况,而不是仅仅观察自然发生的情形。
下面是一些精彩的观点。
Thus, I began to view laboratory experiments as exercises in the writing of instructions and the motivation of subjects, who would perform tasks having little significance outside the laboratory. I could elicit the behaviors I wanted if I wrote instructions that were clear enough and complete enough and I made sure the subjects understood and wanted to follow them.
实验室实验,关键在于指令写得如何,能否让被试理解并愿意配合,做出你想得到的行为。跟实验室以外的真实世界没有多少关系。
In fact, laboratory experimentation seemed to bear a strong resemblance to computer simulation, although simulation also raises some different issues. When simulating, researchers try to write programs that correctly express their assumptions. Researchers face no motivational challenge: computers follow instructions precisely insofar as they can do so. The computers’ actions trace out implications of the programs, and if the programs accurately represent the researchers’ assumptions, the computers’ actions demonstrate the logical implications of the researchers’ assumptions. Thus, computer simulation is very similar to mathematical analysis. When one creates a mathematical model, one states a set of assumptions and then uses algebra to extract some implications of these assumptions. One can experiment with different assumptions until the model exhibits the properties one desires. Likewise, when one creates a computer simulation, one states a set of assumptions and the computer generates some implications of these assumptions. Since computers do nothing on their own initiative, simulation can only reveal the logical implications of what researchers believed before they created the simulations or what they assumed during the process of creating their models. These implications may surprise the model builders, and when they do, the model builders have to decide whether to change their assumptions. Because assumptions are always somewhat arbitrary, model builders can experiment with different assumptions until the computer generates the kinds of outputs they desire.
你只需去run,不出意外,总会得出你要的结果。
难怪,有一位著名的中国学者做IB的,其学生会说,奇怪,significant的结果总是老师自己run出来的。
“当一个理论变得越来越真实的时候,它同时也跟它所代表的现实一样难以理解”。这是Bill在卡内基读书时针对他的一个博士同学的研究想到的。
这种事,我在大学时代也遇到过。我们班一个同学把三个会计指标分解成几十个分指标,获得了优秀毕业论文奖。三个指标都说不清楚,弄成几十个就更清楚、更精准了么?过分的做作。
1966年,Bill正在写一篇文章,他偶然发现数据分析的结论可以产生非常漂亮的理论解释,但跟他预先的假说完全不同。仔细检验,他才发现他的助手在数字录入时有系统性偏差!
Hence I had just spent weeks trying to make sense of data that contained large systematic errors. Moreover, I had been quite successful. In effect, I had constructed a logically satisfying theory based on random noise!
噪音也能产生系统的理论。
我经常说,你拿北大西门外肯德基Foot Traffic的流量变化做回归分析,都能解释每年北大人考上密西根大学博士的数量。而且R-Squared很可能大于0.3!
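顺着这个玩笑,可以用一小段程序演示“钓鱼式”数据分析:下面是一个极简的示意(数据纯属虚构的随机噪音,n_years、n_candidates 等参数均为随意假设),在几十条与“结果”毫无关系的随机序列里挑出相关性最高的一条,R-Squared 往往就能超过 0.3。

```python
import numpy as np

rng = np.random.default_rng(0)

# 纯噪音:一个“结果”变量(比如每年考上某校博士的人数),
# 加上 50 个同样是纯噪音的“候选预测变量”(比如各处的人流量)。
n_years, n_candidates = 15, 50
y = rng.normal(size=n_years)
X = rng.normal(size=(n_candidates, n_years))

# “钓鱼”:逐个尝试候选变量,留下看起来最漂亮的那个。
correlations = np.array([np.corrcoef(x, y)[0, 1] for x in X])
best = np.max(np.abs(correlations))
print(f"best |r| among {n_candidates} noise predictors: {best:.2f}")
print(f"implied R-squared: {best**2:.2f}")
```

这里没有任何真实关系,但只要候选变量够多,总能“发现”一个像模像样的相关。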
Hayek (1975: 92) observed: ‘It may indeed prove to be far the most difficult and not the least important task for human reason rationally to comprehend its own limitations.’ The realization did not come easily or quickly to me, but I gradually began to view rationality as a potentially dangerous scientific tool. Rationality arises from human physiology; our minds feel comfortable when we perceive relations as being logical. Our shared rationality helps you to understand what I am saying. But rationality also constrains our ability to understand because our judgments about whether we do understand involve rational assessments of our explanations: when our minds say we understand, we stop seeking for further understanding. Rationality also warps our perceptions, and it leads us to oversimplify (Faust 1984). Such distortions are probably consequences of the physiology of human nervous systems.
当我们获得貌似“理性”的答案时,我们的大脑感到舒服。不再追究。
But this scientific rationality generates logical contradictions, distorts our observations, and extrapolates incomplete knowledge to ridiculous extremes. We seek scientific rationality because it pleases our minds, but what gives our minds pleasure may not give us insight or useful knowledge.
给我们的大脑带来愉悦的,不一定给我们真正的洞见和知识。
I discovered the ambiguity surrounding human judgments about research findings when I became the editor of Administrative Science Quarterly in 1968. My predecessor bequeathed me a thigh-high stack of manuscripts that needed review. I was embarrassed that many authors had been waiting months for feedback, so I weeded out the topics that obviously did not suit the journal and then mailed manuscripts to hundreds of reviewers. After a few months, I had received more than 500 pairs of reviews, and I was amazed by the discrepancies among the reviews: only a small fraction of the reviewers agreed with each other as to whether a manuscript should be accepted for publication, returned to the author for revision, or rejected. Counting an ‘accept’ as 1, a ‘revise’ as 0, and a ‘reject’ as –1, I calculated a correlation of 0.12 between the recommendations of pairs of reviews. This correlation was so low that knowing what one reviewer had said about a manuscript would reveal almost nothing about what a second reviewer had said or would say. More generally, the reviewers exhibited almost no agreement about what constitutes good research, what findings are credible, what topics are interesting, or what methods are appropriate.
Bill刚出任ASQ主编的时候,每篇文章的两个Reviewers之间意见的Correlation只有0.12。
不知道现在三个主编,四十个副主编,400多个board member的SMJ这个指标是多少。
我们没有多少共识。而你把这叫做科学?!
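Starbuck 的 0.12 是怎么算出来的?把推荐意见编码(accept = 1,revise = 0,reject = −1),再对成对的评审意见求 Pearson 相关即可。下面用一组虚构的评审数据示意这个算法(数据纯属假设,并非 ASQ 的真实记录);r 为 0.12 意味着两位评审共享的方差只有 r² ≈ 1.4%。

```python
import numpy as np

# 假设两位评审对同样 12 篇稿件的推荐意见,
# 按 Starbuck 的编码:accept = 1, revise = 0, reject = -1。
reviewer_a = np.array([ 1,  0, -1,  0,  1, -1,  0,  0, -1,  1,  0, -1])
reviewer_b = np.array([ 0, -1,  0,  1, -1,  0,  1, -1,  0,  0,  1,  0])

# 评审间一致性即两列编码的 Pearson 相关系数。
r = np.corrcoef(reviewer_a, reviewer_b)[0, 1]
print(f"inter-reviewer correlation: {r:.2f}")
print(f"variance in one review explained by the other: {r**2:.1%}")
```

知道一位评审的意见,对预测另一位几乎没有帮助,这正是 Starbuck 的要点。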
Sometime in the late 1970s, I gave a talk that contrasted subjective perceptions with objective data. Afterward, Karl Weick asked me: ‘What if there are no objective data?’ I found this a puzzling, almost incomprehensible question. But I have great respect for Karl, and I began to experiment with interpreting supposedly ‘objective’ data as arising from mental or social processes.
一个人的感知是主观的。很多人的共同感知(即使是错误的)也是客观的存在(à la Karl Weick)。Individual perception might be biased. Collective perception is reality.
Choosing time series entirely at random, an economist would need only three trials on average to discover a correlation greater than 0.71. Even if the economist removed linear trends from series before correlating them, the economist would require only five trials on average to find a correlation greater than 0.71.
时间序列有问题。
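Starbuck所说的“平均三次尝试就能撞上 |r| > 0.71”,可以用模拟粗略体会。下面是一个示意性的小实验(序列长度、试验次数均为随意假设,具体比例会随参数变化):彼此独立的随机游走之间,|r| 超过 0.71 的概率远高于独立同分布的噪音。

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_abs_corr(walk: bool, length: int = 100) -> float:
    """抽一对独立序列,返回其相关系数的绝对值。"""
    a, b = rng.normal(size=(2, length))
    if walk:  # 累加后变成随机游走(带有“趋势”的时间序列)
        a, b = np.cumsum(a), np.cumsum(b)
    return abs(np.corrcoef(a, b)[0, 1])

trials = 2000
frac_walks = np.mean([sample_abs_corr(True) > 0.71 for _ in range(trials)])
frac_noise = np.mean([sample_abs_corr(False) > 0.71 for _ in range(trials)])
print(f"P(|r| > 0.71), independent random walks: {frac_walks:.2f}")
print(f"P(|r| > 0.71), independent i.i.d. noise: {frac_noise:.4f}")
```

这就是计量经济学里经典的 spurious regression 现象:有趋势的序列之间,高相关几乎唾手可得。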
I speculated that a similar phenomenon might occur with cross-sectional data.
横截面数据也有问题。
First, a few broad characteristics of people and social systems pervade psychological data — sex, age, intelligence, social class, income, education, or organization size. Such variables correlate with many behaviors and with each other. Second, researchers’ decisions about how to treat data can create correlations between variables. Third, so-called ‘samples’ are frequently not random, and many of them are complete subpopulations even though study after study has turned up evidence that people who live close together, who work together, or who socialize together tend to have more attitudes, beliefs, and behaviors in common than do people who are far apart physically and socially. Fourth, some studies obtain data from respondents at one time and through one method. By including items in a single questionnaire or interview, researchers suggest to respondents that they ought to see relationships among these items. Lastly, researchers are intelligent, observant people who have considerable life experience and who are living successful lives, so they are likely to have a sound intuitive understanding of people and of social systems. They are many times more likely to formulate hypotheses that are consistent with their intuitive understanding than ones that violate it; they are quite likely to investigate correlations and differences that deviate from zero; and they are less likely than chance would imply to observe correlations and differences near zero.
Thus, social science researchers should not expect correlations to center around zero, and statistical tests with a null hypothesis of no correlation are biased toward statistical significance.
统计分析大多是自嗨。
Reports by social scientists routinely overstate the generality of their observations. In particular, researchers often conceal the ambiguity in their observations by focusing on averages and using hypothesis tests about averages to convert ambiguities into apparently clear conclusions. Thus, instead of characterizing statistical findings by stating percentages such as ‘70 percent of adult men have brown hair,’ researchers state, test, and do not reject the hypothesis: ‘Men have brown hair.’ Then they describe such findings by saying ‘Men have brown hair’ as if the description describes everyone or every situation. The distribution of hair colors become a generalization. Much of the time, such generalizations have no bases beyond computed averages, that is, ‘An average man had brown hair.’ Since social phenomena often have overlapping frequency distributions, comparisons between averages may say nothing about specific instances. For example, the average height of a man exceeds the average height of a woman, but the heights of men and women have frequency distributions that overlap nearly 100 percent. What is the probability that Robert is taller than Roberta? Researchers use other language conventions as well to fabricate generality.
以偏概全。
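Robert 与 Roberta 的例子可以算一算。下面的均值和标准差纯属示意,并非真实统计数字:即使“男性平均身高高于女性”成立,随机抽取的一对男女中女方更高的概率也并不可忽略,对平均数的比较说明不了具体个案。

```python
import math

# 示意性(假设)的身高分布,单位 cm,并非实证数据:
# 男性 ~ N(178, 7.6**2),女性 ~ N(165, 7.1**2)。
mean_m, sd_m = 178.0, 7.6
mean_f, sd_f = 165.0, 7.1

# 两个独立正态变量之差仍是正态分布:
diff_mean = mean_m - mean_f
diff_sd = math.sqrt(sd_m**2 + sd_f**2)

# P(Robert 比 Roberta 高) = P(差值 > 0),用正态 CDF(erf)计算。
p_taller = 0.5 * (1 + math.erf(diff_mean / (diff_sd * math.sqrt(2))))
print(f"P(a random man is taller than a random woman) = {p_taller:.2f}")
```

在这些假设数字下,仍有大约一成的配对里女方更高;均值之差掩盖了分布的重叠。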
Sometime in the late 1970s or early 1980s, I became aware of Pygmalion effects, in which predictions affect outcomes. Predictions may become either self-fulfilling or self-denying.
你想要什么,你会得到什么。
自我成就的预言。
研究多是钓鱼,而不是发现。
These effects weaken even further the usefulness of retrospective research. Although explaining the past may reassure us and comfort us, it may do little to help us influence our futures.
理解过去可以慰藉我们,但可能无助于我们应对未来。
These effects also confront us with the issue of what realities we wish to understand — the ones that did exist when we gathered the data or the ones that might exist after we attempt to exert influence.
现实是变化的。我们对所观察的现实是有影响的。海森堡测不准原理。
So now I advocate design in the belief that efforts to design better organizations can manufacture both greater understanding and better realities. The systems we are trying to understand are much more complex and flexible than prevalent research methods (rooted in spontaneous data and static analyses) are capable of comprehending.
The phenomena that I once called ‘realities’ are and ought to be partly products of our research because to obtain useful understanding of these phenomena, we must attempt to change them. There is also a possibility that we might help to create a better world.
Bill Starbuck一生中曾经尝试过用社会科学的几乎所有存世的方法论去研究管理问题。最终还是觉得隔靴搔痒。
他在德国碰见的一位医生,使他相信要用患者对治疗的反应来判定诊断和治疗的效果。
不管你是啥理论和疗法,最终是要看治病的效果,病症是否有所改善 (how patients respond to treatments)。
通过改变治疗,可以同时构建和检验我们对现实的理解。如果患者向好的方向发展,就证明治疗是有效的。
管理理论的功效亦是应该如此检验。
The links between symptoms and treatments are not the most important keys to finding effective treatments. Good doctors pay careful attention to how patients respond to treatments. If a patient gets better, current treatments are heading in the right direction. But, current treatments often do not work, or they produce side-effects that require correction. The model of symptoms-diagnoses-treatments ignores the feedback loop from treatments to symptoms, whereas this feedback loop is the most important factor. ‘Doctors should not take diagnoses seriously because strong expectations can keep them from noticing important reactions. Of course, over time, sequences of treatments and their effects produce evidence that may lead to valid diagnoses.’
周其仁说,定价就是猜。
我说,看病和管理也是猜。
对于有些人而言,研究连猜都不是,而是要证明自己的想法是对的。
欢迎来到真实世界。