Mapping the conflicts between idiographic and nomothetic subcultures

This article maps a controversy that has divided network science over the last 15 years about the epistemic status of a central notion: scale-freeness. The article accounts for the two main disputes, in 2005 and in 2018, as they unfolded in academic publications and on social media, and analyzes the conflict and the reasons why it reignited in 2018, to the surprise of many. It is argued that (1) the concept of complex networks is shared by the distinct subcultures of theorists and experimentalists; and that (2) these subcultures have incompatible approaches to knowledge: nomothetic (scale-freeness is the sign of a universal law) and idiographic (scale-freeness is an empirical characterization). Following Galison, this article contends that network science is a trading zone where theorists and experimentalists can trade knowledge across the epistemic divide.

Readers from the field of Science and Technology Studies (STS) have encountered the network, and likely observed how disunified it can be: following Galison’s (1999: 157) metaphor, a laminated mess of “partially independent strata supporting one another”. As we argued in a previous article (Venturini et al., 2019), STS engage with the network in multiple ways. Actor-network theory (ANT) used the network as a metaphor to criticize “notions as diverse as institution, society, [and] nation-state” (Latour, 1999: 15; see also Law, 1999), although post-ANT moved away from networks. More recently, at the intersection of digital media studies and STS, some scholars use the network as a “social science apparatus” (Ruppert et al., 2013), often in a perspective of critical proximity, while reflecting on the network’s pervasive involvement in the infrastructure of datafied society. In this context, like other forms of Big Data, the network’s material-semiotic properties produce data-worlds, for instance, as network maps are interpreted as if they are self-evident (Bounegru et al., 2017; Burrows and Savage, 2014; Marres and Gerlitz, 2016). This movement is critical of Computational Social Science (Lazer et al., 2009), and concerned with the theories and models embodied in network tools (Rieder and Röhle, 2017), and their uses and abuses (Marres, 2012). STS scholars know how problematic the network can be, as the confusion it causes has been documented. Nevertheless, they did not create their own trouble with networks (van Geenen et al., forthcoming); they imported it.

STS scholars tend to presume that the disunification of the network happened when it was repurposed from the exact sciences, where the network was originally unified. But this initial state of unity never existed: The network has always been disputed.

There is no doubt that the raison d’être of the network is that relations deserve no less attention than substance; but even that point takes incompatible meanings in different contexts. Latour (2004: 63) warns STS scholars: “you should not confuse the network that is drawn by the description and the network that is used to make the description. … Surely you’d agree that drawing with a pencil is not the same thing as drawing the shape of a pencil. It’s the same with this ambiguous word, network.” The network can be either something we can know or a way to know things (this argument is developed in Venturini et al., 2019).

In sociology, Erikson (2013) accounts for a slightly different opposition, between the relationalist approach, which “rejects [the] essentialism” of the network, and the formalist approach based on a “structuralist interpretation”. Even within the formalist approach, prevalent in the natural sciences, the network has conflicting significations.

The transdisciplinary field of network science (NS) emerged in the late 1990s under the joint patronage of social network analysis and the study of complex systems (Barabási, 2016; Borgatti et al., 2009), focusing on a specific type of network: the complex network. From this crucible, dominated by the formalist influence of physicists, statisticians, and quantitative sociologists (Hidalgo, 2016), old and new theories and methods radiate toward the social and natural sciences (Freeman, 2008; Newman, 2018). Despite its success, NS has witnessed a “clash of two cultures” (Keller, 2007) in which fundamental claims about scale-freeness have been disputed for more than a decade.

In this piece, I follow in the footsteps of Galison (1999) in Trading Zones, but focus on NS. Like him, I account for the frictions and exchanges between different subcultures. I find experimentalists whose ways and goals challenge those of theorists. I find that the (complex) network is a shared object that gets attributed different meanings in different contexts while allowing knowledge to circulate over epistemic gaps, and allowing the field to maintain its continuity. I argue that controversies in NS illustrate a familiar tension between digital methods and computational social science (Baya-Laffite and Benbouzid, 2017; Masson, 2017), and I show that it extends a gap present in the exact sciences: the nomothetic/idiographic divide.

After introducing the key concepts of NS, and then my method, I account for the first dispute, in 2005, regarding which statistical distributions characterize complex networks. I analyze its commonly accepted framing as a tension between disciplines, and I explain why the subsequent efforts of multiple authors to defend a position of compromise did not put an end to the controversy. Then I introduce the notions of nomothetic and idiographic approaches to knowledge, and account for the second dispute, in 2018, concerning the experimental procedures capable of assessing the (alleged) pervasiveness of complex networks. Finally, I reframe the second dispute as an epistemic clash between theorists and experimentalists, the latter gradually opposing their own agenda to the former.

NS is not a firmly delineated discipline but a “highly interdisciplinary research area” (Börner et al., 2007) whose origin is disputed. For Borgatti et al. (2009), NS emanates from the older and well-established field of social network analysis. For Barabási (2016) and Hidalgo (2016), NS’s roots are in the study of complex systems in the natural sciences, and for Brandes et al. (2013), in a transdisciplinary mathematical apparatus.

Despite these different perspectives, all scholars acknowledge multiple points of origin, and the trouble this causes. Freeman (2008) accounts for two distinct communities within the field, and their mutual influence. Keller (2005, 2007) argues against the “lure of universality” and points to a “clash of two cultures”. Erikson (2013: 219) writes that the field “often mixes two distinct theoretical frameworks, creating a logically inconsistent foundation.” Hidalgo (2016) wonders whether the field is “disconnected, fragmented, or united,” arguing that social and natural scientists do not understand each other because they have different goals. However, although there is consensus on the presence of tensions between disciplines, these authors disagree about which disciplines are clashing, and why.

I argue that the hypothesis of a disciplinary divide is not sufficient, as it fails to explain why the controversy reignites in 2018—which network scientists see as “surprising” (Holme, 2019). Before we get to that point, I must briefly present NS and several concepts necessary for understanding its contentious points.

NS focuses on the study of complex networks. Euler’s work in the 18th century gives birth to graph theory, and Moreno’s (1934) sociograms initiate the practice of visualizing social networks (Freeman, 2000). During the late 1990s, NS emerges as an interdisciplinary field around the object complex network (Barabási, 2016; Borgatti et al., 2009; Börner et al., 2007). The notion is invented almost simultaneously by two different teams: Watts and Strogatz (1998), who call it small-world, and Barabási and Albert (1999), who call it scale-free. Both teams draw inspiration from the work of Erdős and Rényi (1960) on the random graph model, a probabilistic version of the network. The complex network matches, for the first time, general properties of many empirical phenomena (see Barabási, 2002). It has no precise definition but nevertheless refers to a special kind of network, one “with non-trivial topological features”, as Wikipedia puts it. In a nutshell, complex networks stand somewhere between order and disorder (see Figure 1).

Figure 1. Three types of networks, about 1000 nodes each. The complex network (center) is sometimes presented as an intermediary between order (left, a square lattice) and disorder (right, a random network).

The NS controversy is specifically about scale-freeness, a criterion Barabási and Albert use to characterize complex networks. In their model, as a network grows, its new nodes favor linking to the most connected nodes, a phenomenon known as preferential attachment. This phenomenon is similar to the Matthew effect (Merton, 1968), also known as “the rich get richer, and the poor get poorer.” The resulting network is called “scale-free” because it features the same properties across multiple scales. Specifically, the degree of the nodes, i.e., the number of neighbors, follows a power-law distribution (a statistical distribution that is itself scale-free). In practical terms, a few nodes get the most links (the “hubs”), while most nodes are poorly connected.
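To make the mechanism concrete, here is a minimal simulation sketch of preferential attachment (a simplified illustration in Python, not Barabási and Albert’s own implementation; the sizes and parameters are arbitrary): each new node links to a few existing nodes chosen with probability proportional to their degree, and a handful of hubs end up concentrating most of the links.

```python
# A simplified simulation of preferential attachment (illustrative only,
# not Barabási and Albert's published implementation).
import random
from collections import Counter

def preferential_attachment(n_nodes=10_000, m=2, seed=42):
    random.seed(seed)
    # Start from a small complete seed graph of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i + 1, m + 1)]
    # Each endpoint appears in `targets` once per incident edge, so drawing
    # uniformly from `targets` amounts to drawing nodes proportionally to degree.
    targets = [node for edge in edges for node in edge]
    for new in range(m + 1, n_nodes):
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(targets))  # "the rich get richer"
        for old in chosen:
            edges.append((new, old))
            targets.extend((new, old))
    return edges

edges = preferential_attachment()
degrees = Counter(node for edge in edges for node in edge)
# A few hubs concentrate most links, while most nodes keep the minimum degree.
print("maximum degree:", max(degrees.values()))
print("share of nodes with degree <= 3:",
      sum(1 for d in degrees.values() if d <= 3) / len(degrees))
```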

Pervasiveness is not a concept of NS, but a term I use to refer to one of its main rhetorical points: the empirical observation of the same phenomenon across many unrelated situations. Pervasiveness calls for an answer to an implicit question: Why do we observe the same thing in contexts that have a priori nothing in common? If your goal is to discover laws of nature, pervasiveness is a remarkable observation, because it suggests an underlying pattern.

Universality is a misleading but important concept, which one needs to understand at least superficially. The pervasiveness of the power law has been known in thermodynamics since the 1970s, under the pompous name of universality (Feigenbaum, 1976). Physicists explain this pervasiveness with the concept of self-organized criticality (Bak et al., 1987), which states that the power law naturally arises at the tipping point of a phase transition (called “criticality”) for mathematical reasons related to scale-freeness. Physicists such as Barabási translate this argument from thermodynamics to NS, based on three facts. (1) Phase transition and the Erdős–Rényi random graph model are instances of a process known as percolation (the emergence of a “strongly connected component” in a random graph is similar to the appearance of ice crystals in cooling water). (2) The power law and the complex network are pervasive. (3) The power law characterizes complex networks (via scale-freeness). From these premises, physicists deduce that the power law is the external sign of an underlying complex network. This idea took the name of universality because it extends Feigenbaum’s work, but it actually means something quite different. Feigenbaum’s universality is only an observation; NS’s universality states a law of nature.
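Fact (1) can be illustrated with a short sketch (assuming the Python library networkx; the graph size and degrees are arbitrary): in an Erdős–Rényi random graph, a giant connected component appears abruptly once the average degree crosses 1, which is the graph-theoretic analogue of a phase transition.

```python
# An illustration of percolation in the Erdős–Rényi model (assumes networkx).
# Around average degree 1, a "giant" connected component appears abruptly,
# mirroring the tipping point of a phase transition.
import networkx as nx

n = 10_000
for avg_degree in (0.5, 0.9, 1.1, 1.5, 3.0):
    G = nx.erdos_renyi_graph(n, avg_degree / n, seed=1)
    giant = max(nx.connected_components(G), key=len)
    print(f"average degree {avg_degree}: largest component spans "
          f"{100 * len(giant) / n:.1f}% of nodes")
```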

I refer to this as the argument of universality, and the important point is to track how it changes. The argument appears under different forms in the publications of network scientists such as Barabási. In its strongest form, it affirms the existence of a universal law of complexity, explaining the pervasiveness of the power law and the complex network. The claim is supposed to be rooted in a mixture of graph theory and physics of complex systems, but experimental results gradually accumulated as evidence against it. The claim then degrades into a weaker form, stating the existence of unspecified laws, on the grounds of the pervasiveness of the properties of the complex network. Finally, in its weakest form, universality means only pervasiveness, which fits its colloquial meaning and Feigenbaum’s version. The strongest form claims the existence of a law; the weakest is only an empirical observation. The weak version requires only evidence, but the strong version also requires a theory. These different meanings must not be conflated because they have different validity conditions. Some researchers voice this criticism (Watts and Clauset as cited in Klarreich, 2018). As Barrat et al. (2008: 296) put it, “physicists know extremely well that the universality of some statistical laws is not to be confused with the ‘equivalence of systems’.” In the current state of the controversy, even the weakest form is contested.

Many researchers challenge the pervasiveness of the power law based on its resemblance to the log-normal distribution (a cousin of the famous normal distribution, aka the “bell curve”). Both are quite similar in practice when it comes to matching empirical observations, but scale-freeness is related only to the power law, via the preferential attachment model. In this dispute, the empirical ground of scale-free pervasiveness is at stake. The heavy-tailed distribution is a notion introduced subsequently to subsume the log-normal and power-law distributions.
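The resemblance is easy to reproduce with synthetic data (a minimal sketch; the parameters below are arbitrary, chosen only to make the comparison visible): both distributions produce heavy tails in which a few observations lie orders of magnitude above the median, the qualitative signature commonly read as a power law.

```python
# Synthetic illustration of why the two distributions are easily confused
# (arbitrary parameters, chosen only to make the tails comparable).
import numpy as np

rng = np.random.default_rng(0)
size = 50_000
# Continuous power law with density exponent alpha = 2.5 and x_min = 1,
# sampled by inverting the cumulative distribution function.
alpha = 2.5
power_law = (1 - rng.random(size)) ** (-1 / (alpha - 1))
log_normal = rng.lognormal(mean=0.0, sigma=2.0, size=size)

# Both samples are heavy-tailed: the upper quantiles dwarf the median.
for name, sample in (("power law ", power_law), ("log-normal", log_normal)):
    quantiles = np.quantile(sample, [0.5, 0.9, 0.99, 0.999])
    print(name, np.round(quantiles, 1))
```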

The controversy unfolds in two distinct moments, characterized by the accumulation of traces in the digital public space as network scientists debate on their blogs, on Twitter, and on arXiv, a platform hosting non-peer-reviewed pre-prints. The first dispute happens in 2005 and focuses on the resemblance between the log-normal and power-law distributions. The second happens in 2018 and more directly challenges the measurement of scale-freeness.

My methodology is based on document analysis and draws on controversy mapping (Venturini, 2010). I “follow the actors” (Latour, 1987) to identify the points they see as controversial. I often refer to the researchers involved in the debates as actors or network scientists, for simplicity and because they all engage with NS, although they do not always qualify themselves as such.

The controversy has no clear boundaries. NS concepts and methods have been challenged and discussed throughout the field’s two decades of existence, as in any scientific field. In this article, I specifically investigate the contentious points whose resolution is itself contested by the actors: when they also disagree about their disagreement.

I coded a selection of 40 academic and non-academic publications related to the controversy. I explored the literature with snowball sampling before reducing the corpus to a set of key documents. I selected explicit refutations, refuted articles, commentary, major publications citing the disputes (e.g., attempts at concluding them), and major references. See Table 1 for the list and Figure 3 for a contextualized visualization.

Figures 2 and 3 help familiarize readers with the corpus. For the sake of clarity, I classified the documents into four groups. The initial references in 1999–2000 show early signs of the controversy. The 2005 dispute and 2018 dispute groups gather the publications explicitly referring to each academic feud. The ongoing discussion group contains contributions that do not refer explicitly to either feud.

Figure 2. Network of citations between coded publications over time. The node size indicates the number of times cited in the corpus.

The evolution of citations (Figure 2) shows that the 2005 controversy is rarely cited by subsequent publications. The 2018 controversy largely builds on the 2007–2016 academic discussion.

Placing the documents and their type in chronological order (Figure 3) shows that two waves of reactions happen after the publication of a polemic piece: self-published web commentary in the following weeks, and then academic articles featuring refutations in the following years.

Figure 3. Corpus of 40 documents coded for this study, in chronological order. It includes type of document, direct and indirect refutations, and mentions of other documents in the corpus.

I reduced my qualitative exploration of this corpus to a coding of 12 different arguments framing the controversy, expressing a specific perspective on scale-freeness and universality, or taking an epistemic posture. Each of the 120 data points consists of a brief quote exemplifying a given argument featured in a given publication. The data is available as supplemental material.

I further reduced this corpus by focusing on the 13 most recurrent actors: Alderson, Doyle, and Willinger (who always published together in this corpus), Amaral and Malmgren (idem), Barabási, Barzel, Clauset, Holme, Mitzenmacher, Shalizi, Vespignani, and Watts. Figure 4 shows the co-publication groups in the corpus. These 13 key actors effectively form 10 groups or single actors. I picked this number to ensure that each presented enough material for the analysis.

Figure 4. Authors and their publications in the corpus. Note: the figure does not include authors who are merely quoted in a publication.

I compiled the statements of the key actors by period in a table available as supplemental material. It comprises 79 quotes. Table 2 presents each argument with two examples of quotes. Figure 5 features which key actor states which argument during which period.

Figure 5. Statements by key actors per period: before and during the 2005 controversy, between the two controversies, and during the 2018 controversy. Corresponding quotes are available as supplemental material.

Before engaging with the matter of the two disputes, I summarize how the actors of the controversy frame it themselves. Their recurrent arguments are the following:

  1. A disciplinary divide
  2. A matter of how public relations are enacted by certain researchers
  3. An issue of conceptual ambiguity

 

Argument 1

“For Vespignani, the debate illustrates a gulf between the mindsets of physicists and statisticians” (Klarreich, 2018). A similar opinion is voiced by half of the key actors (see Figure 4), although the problem is rarely situated precisely. Holme (2018b) characterizes the divide as “emergentists” who “didn’t lose faith in the buzzwords of the nineties’ complexity science” such as “universality” versus “statisticalists” for whom “scale-freeness is not scientifically important if it is not testable.” Keller (2007) similarly points at the goal of seeking laws of nature: “biologists have been little concerned about whether their findings might achieve the status of a law. … Physical scientists, however, come from a different tradition—one in which the search for universal laws has taken high priority.” However, in contrast to the hypothesis of a disciplinary divide, Clauset insists instead on the “importance of … good statistics” regardless of the discipline (Keller, 2005).

Argument 2

Barabási’s critics condemn his “grand claims of universality” and his “apparent arrogance” (Clauset, 2005b). Symmetrically, Barabási suggests that his critics exaggerate their own claims “to get maximal attention” (Barabási, 2018). His fiercest opponents see a scientific issue in his promotion of what they consider disproved claims: “Garbage In, Gospel Out” (Willinger et al., 2009: 598). However, Barabási always responds to the refutations of his articles (see the red arrows in Figure 3) and is supported by respected researchers (Holme, 2019). There is no consensus on the disproval of Barabási’s claims, and some even find that “all his talk about networks is good for computer science in general” (Venkatasubramanian, 2005).

Argument 3

In 2018, Clauset and Holme mention conceptual ambiguity as a cause of the controversy. More generally, many authors acknowledge the absence of a clear definition for important concepts, occasionally explained by the lack of maturity of NS as a field (Vespignani in Klarreich, 2018).

The idea of a disciplinary divide is popular after the first dispute (Keller, 2007). It may explain why the academic activity between 2007 and 2016 aims at filling a knowledge gap (most notably in Barrat et al., 2008; Clauset et al., 2009; Perc, 2014; Stumpf and Porter, 2012). The same actors were surprised when the controversy reignited in 2018, which suggests that they believed they had ended it. Klarreich (2018) quotes Vespignani: “the important question is not whether a network is precisely scale-free but whether it has a heavy tail … I thought the community was agreeing on that.” Similarly, Holme (2019) states: “I, and (I believe) most colleagues, were following the principle that ‘knowledge of whether or not a distribution is heavy-tailed is far more important than whether it can be fit using a power law’ … Thus, it was surprising that the scale-free debate would flare up again.”

The dispute is about the claim that observed power laws are log-normal distributions. Some argue a flaw exists in the statistical procedure used to identify power laws, thus challenging their pervasiveness. The dispute unfolds as follows.

In May 2005, Barabási (2005) publishes an article on the presence of “heavy tails in human dynamics”.

In October, Stouffer et al. (2005: 1) publish on the online repository arXiv (no peer review) a refutation of Barabási’s claim that “the dynamics of a number of human activities are scale-free.” They argue that “the reported power-law distributions are solely an artifact of the analysis of the empirical data.”

In the days following the release of Stouffer et al.’s pre-print, other researchers comment on the dispute on their respective blogs (Clauset, 2005a; Shalizi, 2005; Venkatasubramanian, 2005). Clauset (2005a) frames it as a matter of “good empirical research” and summarizes the main sticking point as follows: “[Stouffer et al.] eliminate the power law as a model, and instead show that the distributions are better described by a log-normal distribution.”

Venkatasubramanian brings Mitzenmacher into the controversy by interviewing him. Mitzenmacher defends the impossibility of contrasting the two distributions in practice, a point he had established in a previous publication (Mitzenmacher, 2004).

In November, Barabási et al. (2005: 2) publish a response to Stouffer et al.’s rebuttal, also on arXiv. First, they acknowledge that both distributions match the observations, but add that Stouffer et al.’s claim stems from a misunderstanding of the original data set. Second, they argue that Stouffer et al.’s work “fails to propose an alternative mechanism indicating that a lognormal distribution could also emerge,” and is, therefore, “a mere exercise in statistics, one that has little hope to be conclusive” on a larger data set. From that point on, Mitzenmacher’s argument about the relative irrelevance of contrasting the log-normal and power-law distributions meets consensus in the community.

In a second blog post, Clauset (2005b) comments on the dispute, and like Mitzenmacher, reframes it to account for the role of Barabási’s rhetoric in the reception of his work:

Barabasi is not one to shy away from grand claims of universality. [His work] does not show causality, nor does it provide falsifiable hypotheses by which it could be invalidated. Barabasi’s work in this case is suggestive but not explanatory, and should be judged accordingly. To me, it seems that the contention over the result derives partly from the overstatement of its generality, i.e., the authors claims their model to be explanatory.

Clauset criticizes the strong version of the argument of universality, and points to a specific problem with Barabási’s law-seeking approach: It is not falsifiable. However, note that Clauset does not contest the method or its result, which he qualifies as “suggestive” but empirically valid if “judged accordingly”.

After a second pre-print from Stouffer et al. (2006) refining their argument, the dispute moves to the classic academic space of peer-reviewed publications. At this point, most authors acknowledge that “whether or not a distribution is heavy-tailed is far more important than whether it can be fit using a power law” (Holme, 2019).

A series of publications contest specific claims on scale-freeness. Keller (2005, 2007) refines her epistemic critique of universality. Malmgren et al. (2008, 2009) show that heavy-tailed distributions in complex networks are not necessarily caused by preferential attachment and propose an alternative model. Muchnik et al. (2013) support the point. Alderson et al. (2019) keep refuting the scale-free model in the case of the internet, a critique they formulate during every phase of the controversy, but without mentioning either dispute (Doyle et al., 2005; Willinger et al., 2004, 2009).

Clauset solidifies his position with the help of Shalizi and Newman (Clauset et al., 2009). They develop a rigorous framework for testing scale-freeness, confirming certain empirical observations of power laws and ruling out others.
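As a hedged sketch of how this framework is typically applied today, the third-party Python package powerlaw, which implements the Clauset–Shalizi–Newman procedure, estimates the exponent and lower cutoff of a candidate power law and compares it with a log-normal fit via a likelihood-ratio test; the degree data below is synthetic.

```python
# A sketch of the Clauset–Shalizi–Newman testing procedure, assuming the
# third-party `powerlaw` package (pip install powerlaw) and synthetic data.
import numpy as np
import powerlaw

rng = np.random.default_rng(0)
degrees = np.rint(rng.lognormal(mean=1.0, sigma=1.5, size=5_000)).astype(int) + 1

fit = powerlaw.Fit(degrees, discrete=True)
print("estimated exponent alpha:", round(fit.power_law.alpha, 2))
print("estimated x_min:", fit.power_law.xmin)

# Likelihood-ratio comparison: a positive R favors the power law, a negative R
# favors the log-normal; p indicates whether the sign is statistically reliable.
R, p = fit.distribution_compare('power_law', 'lognormal')
print("log-likelihood ratio R:", round(R, 2), "p-value:", round(p, 3))
```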

Stumpf and Porter (2012) also publish an article presented in 2018 as the final word on the controversy: “The most productive use of power laws in the real world will … come from recognizing their ubiquity … rather than from imbuing them with a vague and mistakenly mystical sense of universality” (666).

Despite critiques, Barabási keeps supporting the argument of universality, albeit under a weaker form (Barzel and Barabási, 2013). The situation leads Pachter (2014) to comment on his blog that “Barabási’s ‘work’ is a regular feature in the journals Nature and Science despite the fact that many eminent scientists keep demonstrating that the network emperor has no clothes,” echoing the recurrent claim that the pervasiveness of power laws is a “myth” (Lima-Mendez and Van Helden, 2009; Shalizi, 2010; Willinger et al., 2009).

Barabási publishes the book Network Science (2016), in which he comments on the first dispute that “as long as there is empirical data to be fitted, the debate surrounding the best fit will never die out” (151) and stands behind the pervasiveness of the power law despite a “decade-long crusade against network science” (16).

The second dispute makes visible why the disagreement persists. Before I get to this point, I must establish how the controversy differs from its depiction by actors, so that we can see beyond the hypothesis of a disciplinary divide.

I account for the position of key actors on scale-freeness across the two disputes by tracking the four following statements, systematically coded for the 40 publications of the corpus:

  1. We CAN contrast the power-law and log-normal distributions in empirical situations.
  2. We CANNOT contrast them.
  3. Scale-freeness requires a better statistical characterization.
  4. A basic test of scale-freeness is useful to science even if it is not perfectly rigorous.

 

I use these statements as an analytical grid of argumentative positions that key actors may or may not occupy during each dispute. I show this grid visually for ease of understanding. As only the first two statements are mutually exclusive, I juxtapose only them. According to the key actors’ narrative, “statisticians” defend the possibility of contrasting the distributions, and “physicists” argue that it does not matter in empirical situations. I position the statements accordingly, as illustrated in Figure 6.

Figure 6. Positions in the narrative of a disciplinary divide. Dashed boxes represent the positions contingent to each side.

The beginning of the 2005 dispute follows the narrative of the physicist/statistician divide. Before the dispute, the empirical issues of contrasting log-normal and power-law distributions are not common knowledge (Figure 7(a)). The “statistician” critique (Stouffer et al., 2005) defends the necessity of better procedures (Figure 7(b)). The “physicists’” response minimizes the importance of the log-normal distribution in empirical situations (Venkatasubramanian, 2005; drawing on Mitzenmacher, 2004), and defends the relevance of modeling despite its apparent statistical weakness (Barabási et al., 2005; Figure 7(c)). The argument hinges on the fact that the power law has a model (preferential attachment) while the log-normal distribution does not. At this point of the first dispute, the hypothesis of a disciplinary divide explains well how the controversy unfolds.

Figure 7. How the first dispute unfolds, according to the hypothesis of a disciplinary divide. Dashed boxes represent positions not considered at that point.

This is how the actors explain the closing of the disciplinary gap (setting aside, for now, their surprise when the controversy reignites later). Vespignani states, “the important question is not whether a network is precisely scale-free but whether it has a heavy tail” (Klarreich, 2018). Mitzenmacher’s point quickly prevails, and both “sides” acknowledge the practical issues of contrasting the debated distributions. Then, to cite Clauset (2005a), one can ask for “good tools and good statistics” while acknowledging the benefit of “suggestive” work (Clauset, 2005b). Coexistence between “physicists” and “statisticians” is, after all, possible (Figure 8(a)). As my coding shows (Figure 5), Clauset, Shalizi, and Vespignani acknowledge both positions in their publications (Barrat et al., 2008; Clauset et al., 2009; Shalizi, 2010; see Figure 8(b)). This seems a conscious effort at conciliation, as Vespignani later explains that despite “a gulf between the mindsets of physicists and statisticians … both … have valuable perspectives” (Klarreich, 2018). This reasonable consensus point is summarized by Stumpf and Porter (2012: 666): “knowledge of whether or not a distribution is heavy-tailed is far more important than whether it can be fit using a power law.” Holme (2019) finally comments: “I, and (I believe) most colleagues, were following [this] principle.”

Figure 8. How the controversy was supposed to close, according to the hypothesis of a disciplinary divide.

Why the controversy reignites in 2018 despite this considerable reconciliation effort is worth investigating. I visualize the coding of key actors (Figure 5) using this grid in Figure 9, for convenience. Notice Clauset’s trajectory, as he follows the sequence in reverse: He starts in the “happily ever after” position (Figure 8(b)) in 2005 and moves to the typical “statistician” position (Figure 6(a)) in 2018. The second dispute is triggered by a pre-print by Broido and Clauset (2018).

Figure 9. Statements of key actors during three different periods. Annotations highlight the differences from the preceding period.

Clauset actively contributes to reaching consensus until the second dispute. He refrains from framing the debate as a disciplinary matter and co-publishes with statisticians and physicists. His publications are respected and show an in-depth understanding of Barabási’s position. “Power-law Distributions in Empirical Data” (Clauset et al., 2009) is the second most-cited publication of the corpus, cited twice by Barabási (2016, 2018). However, Clauset (2005b) also demands “falsifiable hypotheses”. It is this imperative of falsifiability, and not a disciplinary divide, that grounds his reopening of the controversy.

To understand how a criterion as consensual as falsifiability can become controversial, I need to introduce two approaches to knowledge that render visible the epistemological commitment of network scientists, notably during the second dispute.

The type of knowledge Barabási produces determines his position in the controversy. His approach postulates the existence of universal structures: Phenomena obey laws, and the purpose of science is to find them. This position is what the philosopher Windelband introduced as the nomothetic approach to knowledge (Lindlof, 2008; see also Munk, 2019), from the Greek “proposition of the law”. From that perspective, regularities are how nature tells us its structure, and science tries to understand that language. Barabási is the textbook example of the nomothetic mind. See, for instance, how he presents the power law in his best-selling book Linked (2002: 77, emphasis added):

Nature normally hates power laws. In ordinary systems all quantities follow bell curves, and correlations decay rapidly, obeying exponential laws. But all that changes if the system is forced to undergo a phase transition. Then power laws emerge – nature’s unmistakable sign that chaos is departing in favor of order. The theory of phase transitions told us loud and clear that the road from disorder to order is maintained by the powerful forces of self-organization and is paved by power laws. It told us that power laws are not just another way of characterizing a system’s behavior. They are the patent signatures of self-organization in complex systems.

Windelband opposes the nomothetic to the idiographic approach to knowledge. This other perspective focuses on accounting for the specifics of phenomena. It is considered typical of the humanities, and the usual idiographic use of networks is to describe (see e.g. Grandjean, 2016; Venturini et al., 2018). Some researchers such as Clauset adopt this approach, as they try to stabilize metrics capable of characterizing scale-free phenomena. Their goal is to improve the descriptive process regardless of general implications. Their phenomena may well disobey laws—if that is what experiments find. This independence from generality is what makes their position idiographic.

Galison (1999) remarks that “[e]ach subculture has its own rhythms of change, each has its own standards of demonstration, and each is embedded differently in the wider culture of institutions, practices, inventions, and ideas” (143). He observes how “theorists trade experimental predictions for experimentalists’ results” (146). In the second dispute, I find similar interactions between nomothetic theorists, such as Barabási, and the idiographic practice of experimentalists, such as Clauset.

The second dispute focuses directly on scale-freeness. It extends the first one, with similar arguments and involves some of the same actors, but unfolds differently.

In 1999, Barabási and Albert publish two famous articles. In the first one (Albert et al., 1999), they study the structure of the World Wide Web and measure that the probabilities that a page cites or is cited “follow a power-law over several orders of magnitude” (130). In the second (Barabási and Albert, 1999), they introduce the concept of preferential attachment and measure multiple data sets to conclude that “large networks self-organize into a scale-free state” (510). This article, considered a pillar of network science, states for the first time the disputed point: that scale-freeness is pervasive.

In January 2018, Broido and Clauset publish on arXiv the pre-print “Scale-free Networks Are Rare”. They retrace the origin and circulation of Barabási and Albert’s original statement:

Across scientific domains and different types of networks, it is common to encounter the claim that most or all real-world networks are scale free. The precise details of this claim vary across the literature … Some versions of this “scale-free hypothesis” make the requirements stronger … Other versions make them weaker. (1)

To challenge the “scale-free hypothesis,” they “carry out a broad test of the universality of scale-free networks by applying state-of-the-art statistical methods to a large and diverse corpus of real-world networks” (2). They conclude that “genuinely scale-free networks are remarkably rare, and scale-free structure is not a universal” (7). The pre-print triggers an instant reaction on the web, notably on Twitter.

In the following days, Holme (2018a) comments in a blog post that “if we could rewrite history and redefine power-laws as ‘something that follows a straight line in a log-log histogram if you squint from the side of a computer screen’, then they would, for sure, be abundant” (the reconciliation position, see Figure 8(b)).

In the following days, Barzel (2018) publishes a short piece online aligned with Holme’s position: “the meaningfulness of scale-free supersedes its detailed empirical accurateness.” He finds the discussion “roughly aligned along a disciplinary divide”.

One month after Broido and Clauset’s pre-print, Klarreich (2018) publishes a piece in Quanta Magazine, presenting and explaining the controversy at length and with great clarity. Despite the absence of peer review, Klarreich writes that “network scientists agree, by and large, that the article’s analysis is statistically sound.” Like Holme and Barzel, she sees two camps and the disciplinary influence of physics. On one side, “supporters of the scale-free viewpoint, many of whom came to network science by way of physics, argue that scale-freeness is intended as an idealized model.” On the other, “critics object that terms like ‘scale-free’ and ‘heavy-tailed’ are bandied about in the network science literature in such vague and inconsistent ways as to make the subject’s central claims unfalsifiable.”

Two months after the publication of the pre-print, Barabási (2018) issues a rebuttal on his website. This self-published piece is more didactic and more polemic than a typical academic publication. For him, Broido and Clauset fail to recognize the scale-free mechanism “[b]y insisting to fit a pure power law to every network, and ignoring what the theory predicts for any of them.” Barabási also challenges the relevance of their procedure: “And the real surprise? Even the exact model of scale-free networks, following a pure power law, fails their test. … The true failure is their methodology: It fails to detect that the gold standard is scale-free.” He concludes that “the study is oblivious to 18 years of knowledge accumulated in network science.” His response ignores the question of falsifiability, and challenges the relevance of the measurement procedure.

In November, Holme (2018b) exposes “the state of affairs” in another blog post: “Simply speaking, there are two camps: those seeing scale-freeness as an emergent property, and those seeing it as a statistical property.” On one side, “emergentists … view scale-free networks essentially as outlined in Barabási and Albert’s Emergence of scaling in random networks, … [and] didn’t lose faith in the buzzwords of the nineties’ complexity science: universality, fractals, self-similarity, criticality, emergence.” On the other, “statisticalists [argue] that scale-freeness is not scientifically important if it is not testable … [and] are on top of the latest data science trends.” He concludes that “[t]he disappointing realization is that whether scale-free networks are rare [or not] is really a choice that needs to be argued by words”—a nice example of Kuhn’s (1962) “paradigm incommensurability”.

On 4 March 2019, Nature Communications publishes two articles: Broido and Clauset’s (2019) revised article “Scale-free Networks Are Rare” and Holme’s (2019) complementary article, “Rare and Everywhere: Perspectives on Scale-free Networks.”

Broido and Clauset’s (2019) revised article is substantially the same, retaining the original data and analysis while making the argument clearer and more solid. They clarify that their definition of scale-freeness is not based on preferential attachment and add a “robustness analysis” section implicitly addressing Barabási’s technical and conceptual concerns.

Holme (2019) summarizes the controversy and reflects on it. For him, the “controversial topic” is whether scale-free networks are “rare or universal” and “important or not”. He argues that “in the Platonic realm of simple mechanistic models, … the concepts of emergence, universality and scale-freeness are well-defined and clear. However, in the real world, … they become blurry. … Now we have one camp … thinking of scale-free networks as ideal objects …, and another seeing them as concrete objects belonging to the real world.” He suggests finding consensus by acknowledging the legitimacy of studying complexity-related notions, such as scale-freeness, and the need to build a better statistical framework. He remarks finally that “it often feels like the topic of scale-free networks transcends science.”

The argument of universality is divisive. My coding identified six publications stating it (four co-signed by Barabási) and nine publications criticizing it. Only one does both (Barrat et al., 2008), making a point similar to Holme’s: Universality is a defined concept “related to the identification of general classes of complex networks” (76), but “all knowledgeable physicists would agree” that “the quest for universal laws … cannot apply to network science” (296). The other publications mentioning universality pick a side.

Although a formal critique of universality for complex networks has existed since at least 2005 (Keller, 2005), Barzel and Barabási (2013) disregard it. They acknowledge that “a mathematical framework that uncovers the universal properties of [complex networks] continues to elude us” (673) but do not question the idea itself. Network Science (Barabási, 2016) adopts the same position. This lack of dialogue suggests incompatible worldviews.

Keller develops the most precise argument against universality. She identifies a “clash of two cultures” with the “tradition … in which the search for universal laws has taken high priority,” i.e., the nomothetic approach (2007). For her, the argument of universality is invalid, and successful only for reasons external to the criteria for scientific truth. She explains the “faith in … ‘the unique and deep meaning of power laws’” by “the rapid growth of the sector of the publishing industry” and “the remarkably effective uses of language employed in presenting these ideas” (2005: 1067). However, as power laws “are not as ubiquitous as was thought,” and their presence “tells us nothing about the mechanisms that give rise to them,” the claims “that scale-free networks are a ‘universal architecture’ … are problematic” (2007).

The “clash” is not about the universality of scale-freeness; this is just a disagreement. The clash is about the scientific status of the disagreement. Some consider universality disproved; others consider it legitimate. For the former, “the network emperor has no clothes” (Pachter, 2014); the latter disregard the critique. This controversy is a disagreement about a disagreement, and the parties lack common ground to settle their contention. Klarreich (2018) comments that Broido and Clauset’s (2018) pre-print “seems to be functioning like a Rorschach test, in which both proponents and critics of the scale-free paradigm see what they already believed to be true.” Clauset and Barabási do not see scale-freeness from the same perspective.

Clauset and Watts focus on falsifiability, in the classic Popperian sense. It shows that they see universality as a hypothesis, an evaluable statement. Clauset remarks early on that Barabási’s work does not “provide falsifiable hypotheses” and, later, that the “scale-free hypothesis” (Broido and Clauset, 2018) “sounds like a nonfalsifiable hypothesis” (Klarreich, 2018). Watts complains that “the claim just sort of slowly morphs to conform to all the evidence, while still maintaining its brand label surprise factor” (Klarreich, 2018).

In contrast, Barzel and Barabási’s (2013) nomothetic approach proceeds by postulating universality, for instance, when they seek “a mathematical framework that uncovers the universal properties of [complex networks]”. Barabási’s position on modeling also shows this. I coded the argument that empirical measures only make sense when backed by a model: Only Barabási states it during each period (see Figure 5). He sees attempts to measure scale-freeness without a model as “a mere exercise in statistics” (Barabási et al., 2005), because for him, the meaning of the findings derives from the postulate of universality embodied in the model. The postulate is part of his way to know.

The different sides of the dispute implicitly disagree on the validity conditions applicable to universality. For Clauset and Watts, universality is a hypothesis that can be proven or disproven. For Barzel and Barabási, it is an epistemic device.

The nomothetic approach to knowledge postulates the existence of universal laws, but it does not state their empirical reality. Keller’s (2005) critique that it is a “faith” is a misinterpretation. The postulate of universality determines an experimental program: which experiments to conduct and how to interpret them. But they can fail. Universality may “elude us” (Barzel and Barabási, 2013). Even so, it drives the scientific process. Questioning the postulate of universality is questioning the entire nomothetic approach. As the latter has been undeniably successful in physics, it confers on universality a remarkably solid foundation. This may explain why Holme and Vespignani defend universality.

Conversely, Keller does not acknowledge universality as a constituent of the nomothetic epistemology. She presents the physicist’s “traditional holy grail of universal ‘laws’” (2007) as if it were a horizon, while it is, instead, part of their way. Asking Barabási to abandon his “faith in, as he says, ‘the unique and deep meaning of power laws’” (2005: 1066) can only be as successful as asking a physicist to disavow physics.

Keller’s position is idiographic, as characterized by Windelband. She opposes Barabási’s universalism with the importance of the specific. She defends the relevance of studying phenomena in their uniqueness and demands that we ponder “when it is useful to simplify, to generalize, to search for unifying principles, and when it is not” (2007).

Contrary to Keller, Clauset seems to fully understand the nomothetic approach, and to acknowledge it. As we have seen, he defends Barabási’s early “apparent arrogance” and his legitimacy in publishing a finding “that is merely suggestive so long as it is honestly made, diligently investigated and embodies a compelling and plausible story.” However, Clauset (2005b) also demands “falsifiable hypotheses by which [Barabási’s claims] could be invalidated.” His refutation in 2018 does not touch upon the postulate of universality, but its empirical validity conditions.

The nomothetic quest for universal laws requires, by nature, changing theory in the face of evidence: The better model replaces the worse. Evidence always has some degree of looseness in this context, as laws are only as good as the experiments. Barabási counters Clauset with this argument, insisting that Broido and Clauset’s “findings do not undermine the idea that scale-freeness underlies many or most complex networks” (Klarreich, 2018). But “Clauset doesn’t find this analogy convincing” and replies that “it is reasonable to believe a fundamental phenomenon would require less customized detective work” (Klarreich, 2018). Clauset et al. (2009) defend a decade-old agenda of assessing the experimental validity of scale-freeness.

Barabási’s experimental program is derived from theory (it is, nevertheless, empirical). His pioneering work on scale-freeness (Barabási and Albert, 1999) prompted multiple authors to seek it in various contexts. The subsequent wave of empirical findings reinforced his claim for the pervasiveness of complex networks (list in e.g. Lima-Mendez and Van Helden, 2009). As Galison (1999) observed in another context, theorists (Barabási) “trade experimental predictions” (pervasiveness) “for experimentalists’ results” (146).

The experimental program gradually affirmed by Clauset is that of an experimentalist. It leaves behind the model-based goals of theorists and focuses instead on experimental validity—Popperian falsifiability. By reclaiming the right to invalidate theory by experiment, Clauset challenged Barabási’s nomothetic program and set foot on idiographic ground.

Galison (1999: 146) makes relevant remarks on the situation:

the two subcultures may altogether disagree about the implications of the information exchanged or its epistemic status. For example, … theorists may predict the existence of an entity with profound conviction because it is inextricably tied to central tenets of their practice … The experimentalist may receive the prediction as something quite different, perhaps as no more than another curious hypothesis to try out on the next run of the data-analysis program. But despite these sharp differences, it is striking that there is a context within which there is a great deal of consensus. In this trading zone, phenomena are discussed by both sides. … It is the existence of such trading zones, and the highly constrained negotiations that proceed within them, that bind the otherwise disparate subcultures together.

The controversy reveals tensions between the agendas of different subcultures where distinct approaches to knowledge prevail. Galison suggests that such epistemic rifts are the norm rather than the exception. These epistemic tensions existed before the controversy, and I see no reason to doubt they can survive it.

Barzel (2018), Vespignani, Watts (Klarreich, 2018), and Holme (2019) acknowledge the importance of Broido and Clauset’s (2018, 2019) work. Of course, it promises better validity standards for the field. But more importantly, by declaring a new experimental program, their work shows the way out of the long-lasting controversy.

I suggest a plausible interpretation of the controversy. The accumulation of evidence against the universality of scale-freeness weakened Barabási’s empirical program. However, most actors still agree on the pervasiveness of complex networks—whatever that means. As Broido and Clauset’s program is resilient to the critique of universality, actors may adopt it to design their own experiments. I see theorists and experimentalists as the two legs of the field. When the theoretical leg weakened, the weight naturally shifted to the experimental leg. The controversy made visible an otherwise latent difference of perspective.

In this article, I account for a long-lasting controversy in network science on the nature of scale-freeness. The first dispute in 2005 focuses on the similarity of the power-law and log-normal distributions; the second, in 2018, on the statistical characterization of scale-freeness. Many network scientists have commented on the situation, generally framing it as a conflict between physicists and statisticians, and assuming that the controversy had been settled around 2010. Thus, they were surprised by its resurgence.

I propose another interpretation that better accounts for the resilience of the controversy. The core disagreement lies in the epistemic status of scale-freeness: a sign of a universal law for some, a characterization of empirical phenomena for others. Like Galison (1999), I observe epistemic subcultures with different approaches to knowledge. Theorists elaborate models and predictions. Experimentalists stabilize the procedures necessary to account for empirical phenomena. These subcultures do not have the same epistemic perspectives, or the same goals.

We can describe these stances with what the philosopher Windelband introduced as nomothetic and idiographic approaches to knowledge (Lindlof, 2008). Theorists seek universal laws whose existence they postulate. Experimentalists favor accurate and local descriptions of phenomena. In that sense, network science is a trading zone where theorists trade predictions for experimentalists’ results.

I argue that the controversy was caused by the rise of an experimental program challenging the theory-driven approach dominant in the field. Theorists (e.g., Barabási) insist that experiments on scale-freeness draw their validity only from a model. Experimentalists (e.g., Clauset) defend the measurement of scale-freeness with model-independent methods. This disagreement on the epistemic status of scale-freeness existed before the dispute, but became visible as each party argued for their own program. The recent endorsement of Clauset’s endeavor by theorists (e.g., Holme) suggests a shift in the field in favor of the experimentalist perspective.

The dynamic of network science offers several lessons in the era of the digitization of the social sciences and humanities. The wariness of some scholars toward the methodological imperialism of the natural sciences, or what they perceive as such, is commonplace. The critique has merit, but it might be misplaced in the case of network science, as epistemic gaps do not run along disciplinary lines. Nomothetic positions in the social sciences cause trouble, but powerful idiographic positions also exist within the natural sciences. When something like the network circulates inside science, it moves with an assemblage of theories, experimental results, and material-semiotic practices, including tools. This assemblage is equivocal by nature, and can be received in different ways. Freeman (2008) documented how network centrality circulated from social network analysis to network science and to physics, supporting idiographic practices in physics as it supports nomothetic practices in sociology.

Transdisciplinary fields like digital methods and computational social science are natural zones of dialogue where network practices and predictive models trade knowledge despite their different epistemic perspectives. As this hybridity is sometimes misconstrued as a threat to idiographic practices, I find it useful to recall that idiographic practices exist in the natural sciences, that influence can go both ways, and that we can collaborate without abdicating our respective approaches to knowledge—whatever they are.
