A Speculative Post on the Idea of Algorithmic Authority

Jack Balkin invited me to be on a panel yesterday at Yale’s Information Society Project conference, Journalism & The New Media Ecology, and I used my remarks to observe that one of the things up for grabs in the current news environment is the nature of authority. In particular, I noted that people trust new classes of aggregators and filters, whether Google or Twitter or Wikipedia (in its ‘breaking news’ mode).

I called this tendency algorithmic authority. I hadn’t used that phrase before yesterday, so it’s not well worked out (and I didn’t coin it — as Jeff Jarvis noted at the time, Google lists a hundred or so previous occurrences). There’s a lot to be said on the subject, but as a placeholder for a well-worked-out post, I wanted to offer a rough and ready definition here.

As this is the first time I’ve written about this idea, this is a bit of a ramble. I’ll take on authority briefly, then add the importance of algorithms.

Khotyn is a small town in Moldova. That is a piece of information about Eastern European geography, and one that could be right or could be wrong. You’ve probably never heard of Khotyn, so you have to decide if you’re going to take my word for it. (The “it” you’d be taking my word for is your belief that Khotyn is a town in Moldova.)

Do you trust me? You don’t have much to go on, and you’d probably fall back on social judgment — do other people vouch for my knowledge of European geography and my likelihood to tell the truth? Some of these social judgments might be informal — do other people seem to trust me? — while others might be formal — do I have certification from an institution that will vouch for my knowledge of Eastern Europe? These groups would in turn have to seem trustworthy for you to accept their judgment of me. (It’s turtles all the way down.)

The social characteristic of deciding who to trust is a key feature of authority — were you to say “I have it on good authority that Khotyn is a town in Moldova”, you’d be saying that you trust me to know and disclose that information accurately, not just because you trust me, but because some other group has vouched, formally or informally, for my trustworthiness.

This is a compressed telling, and swerves around many epistemological potholes, such as information that can’t be evaluated independently (“I love you”), information that is correct by definition (“The American Psychiatric Association says there is a mental disorder called psychosis”), or authorities making untestable propositions (“God hates it when you eat shrimp.”) Even accepting those limits, though, the assertion that Khotyn is in Moldova provides enough of an illustration here, because it’s false. Khotyn is in Ukraine.

And this is where authority begins to work its magic. If you told someone who knew better about the Moldovan town of Khotyn, and they asked where you got that incorrect bit of information, you’d have to say “Some guy on the internet said so.” See how silly you’d feel?

Now imagine answering that question “Well, Encyclopedia Britannica said so!” You wouldn’t be any less wrong, but you’d feel less silly. (Britannica did indeed wrongly assert, for years, that Khotyn was in Moldova, one of a collection of mistakes discovered in 2005 by a boy in London.) Why would you feel less silly getting the same wrong information from Britannica than from me? Because Britannica is an authoritative source.

Authority thus performs a dual function: looking to authorities is a way of increasing the likelihood of being right, and of reducing the penalty for being wrong. An authoritative source isn’t just a source you trust; it’s a source you and other members of your reference group trust together. This is the non-lawyer’s version of “due diligence”; it’s impossible to be right all the time, but it’s much better to be wrong on good authority than otherwise, because if you’re wrong on good authority, it’s not your fault.

(As an aside, the existence of sources everyone accepts can be quite pernicious — in the US, the ratings agencies Moody’s, Standard & Poor’s, and Fitch did more than any other group of institutions to bring the global financial system to the brink of ruin, by debauching their assertions to investors about the riskiness of synthetic assets. Those investors accepted the judgement of the ratings agencies because everyone else was accepting it too. Like everything social, this is not a problem with a solution, just a dilemma with various equilibrium states, each of which in turn has characteristic disadvantages.)

Algorithmic authority is the decision to regard as authoritative an unmanaged process of extracting value from diverse, untrustworthy sources, without any human standing beside the result saying “Trust this because you trust me.” This model of authority differs from personal or institutional authority, and has, I think, three critical characteristics.

First, it takes in material from multiple sources, sources which are not themselves universally vetted for trustworthiness, and it combines those sources in a way that doesn’t rely on any human manager to sign off on the results before they are published. This is how Google’s PageRank algorithm works, it’s how Twitscoop’s zeitgeist measurement works, and it’s how Wikipedia’s post hoc peer review works. At this point, it’s just an information tool.
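This first characteristic can be shown in miniature. Below is a toy PageRank-style iteration — a deliberately simplified sketch, not Google’s actual algorithm — in which a ranking emerges from a pile of unvetted links with no human signing off on the result:

```python
# Toy PageRank: authority emerges from the link choices of many
# unvetted sources, with no human approving the final ranking.
# (A simplified illustration, not Google's actual algorithm.)

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            for target in outlinks:
                # each page passes its rank along its outbound links
                new[target] += damping * rank[page] / len(outlinks)
        rank = new
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = pagerank(links)
top = max(ranks, key=ranks.get)  # "c", the most linked-to page
```

The damping factor models a reader occasionally jumping to a random page; the ranking stabilizes without anyone certifying any individual link.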

Second, it produces good results, and as a consequence people come to trust it. At this point, it’s become a valuable information tool, but not yet anything more.

The third characteristic is when people become aware not just of their own trust but of the trust of others: “I use Wikipedia all the time, and other members of my group do as well.” Once everyone in the group has this realization, checking Wikipedia is tantamount to answering the kinds of questions Wikipedia purports to answer, for that group. This is the transition to algorithmic authority.

As the philosopher John Searle describes social facts, they rely on the formulation X counts as Y in C — in this case, Wikipedia comes to count as an acceptable source of answers for a particular group.

There’s a spectrum of authority from “Good enough to settle a bar bet” to “Evidence to include in a dissertation defense”, and most uses of algorithmic authority right now cluster around the inebriated end of that spectrum, but the important thing is that it is a spectrum, that algorithmic authority is on it, and that current forces seem set to push it further up the spectrum to an increasing number and variety of groups that regard these kinds of sources as authoritative.

There are people horrified by this prospect, but the criticism that Wikipedia, say, is not an “authoritative source” is an attempt to end the debate by hiding the fact that authority is a social agreement, not a culturally independent fact. Authority is as authority does.

It’s also worth noting that algorithmic authority isn’t tied to digital data or even late-model information tools. The designs of Wikileaks, Citizendium, and Apache all rely on human vetting by actors prized for their expertise as a key part of the process. What seems important is that the decision to trust Google search, say, can’t be explained as a simple extension of previous models. (Whereas the old Yahoo directory model was, specifically, an institutional model, and one that failed at scale.)

As more people come to realize that not only do they look to unsupervised processes for answers to certain questions, but that their friends do as well, those groups will come to treat those resources as authoritative. Which means that, for those groups, they will be authoritative, since there’s no root authority to construct from. (I lied before. It’s not turtles all the way down; it’s a network of inter-referential turtles.)

Now there are boundary problems with this definition, of course; we trust spreadsheet tools to handle large data sets we can’t inspect by eye, and we trust scientific results in part because of the scientific method. Also, although Wikipedia doesn’t ask you to trust particular contributors, it is not algorithmic in the same way PageRank is. As a result, the name may eventually be replaced by something better.

But the core of the idea is this: algorithmic authority handles the “Garbage In, Garbage Out” problem by accepting the garbage as an input, rather than trying to clean the data first; it provides the output to the end user without any human supervisor checking it at the penultimate step; and these processes are eroding the previous institutional monopoly on the kind of authority we are used to in a number of public spheres, including the sphere of news.

105 Responses to “A Speculative Post on the Idea of Algorithmic Authority”

  1. How Important Is Google’s Index to Its Continued Success? | Certain Habits Says:

    [...] authority. Clay Shirky discussed algorithmic authority in a recent, must read post. (Go ahead. Read the whole thing. [...]

  2. derek Says:

    People get upset if you tell them Wikipedia is as good an authority as Britannica. What they should be upset by is the revelation that Britannica was never any better an authority than Wikipedia.

  3. george b Says:

    meant to say “Algorithmic authority is a QUANtitative quality,…”

  4. george b Says:

    Algorithmic authority is a qualitative quality, suitable for inference for now and eventually capable of verified calculation defined in situ. The challenge is to determine the algorithmic logic that can re-question the answers toward a more and more finite and accurate result. So our search must be aimed at finding the simplest and best questions that are scalable for that infinite and ongoing process (what turtle can carry “THE” turtle..?).

    Also like the old metaphor, we all must “feel” and individually report how we each “perceive the elephant” in order to see how we might all mutually benefit from “riding the elephant” **. Tribal knowledge must become global insight.

How can we normalize the data (structurally), collect the data (socially), and analyze the data (algorithmically)? Until then we may not enjoy the wisdom that comes from the information that comes from the perspectives that come from the stake-holders. Instead, we may continue making up answers to sell historical artifact newspapers and hysterical(?) non-fact renderings of so-called news to suit our political agendas or other pre-conceived notions.

    Will someone please just ask “WHY” (and asking it five times can help more).
    “Because”, until we get to root cause, we are stuck at “because”, just because..

    (**Some jackasses think that keeping the elephant out of the room is useful. More likely it is just another example of bad thinking, bad politics, or both.)

  5. barnard Says:

    Your definition of “algorithmic authority” seems to be a bit more specific than Nick Szabo’s term “authoritative automata”:


    “When we hear the word ‘authority’ we often think of people who have set themselves up in positions of power and on whom we have become dependent. But there is another kind of authority, often more reliable and trustworthy, that can be provided by things. These are physical standards, security devices, automata, and other objects by which we coordinate our interactions with fellow humans, especially with strangers who might not otherwise be trustworthy. These technologies are crucial to our modern civilization and its ability to make dealings with strangers more secure and reliable.”

    He includes clocks, stoplights, Google page rank, and aggregators as examples, as well as cash registers:

    “To what extent will computer algorithms come to serve as authorities? We’ve already seen one algorithm that has been in use for centuries: the adding algorithm in adding machines and cash registers.”

We’ve all heard the operator at the other end of the phone invoke the authority of the algorithm when we object to something the operator tells us: “that’s what the computer says.”

  6. nnyhav Says:

    The brilliance of placing a parenthetical aside on rating agencies just prior to listing salient characteristics of “algorithmic authority” which parallel those of CDOs is to be applauded. Speculative indeed.

  7. Ian Woollard Says:

I doubt you were ever properly allowed to quote Encyclopedia Britannica in a thesis that you intended to defend. You’re supposed to do proper research, not steal somebody else’s.

  8. filtering when you’re small « Aggregate « Innovation Leadership Network Says:

    [...] you’re a small enterprise; then Clay Shirky wrote about almost exactly the same subject in a post on algorithmic authority; then I ended up talking about the same topic with Paul Moynagh from the innovation consulting [...]

  9. Steve Farrell Says:

    We’ve been engaged in an experiment in algorithmic authority: The Hourly Press (http://hourlypress.com).

Our model is this: someone curates a community, the community is analyzed for an authority structure, and the choices of that community are aggregated to produce a timely list of links. (Specifically, the curator selects authority hubs (‘editors’), we follow everyone that those hubs are following on twitter, and the links that they shared in the past 12 hours are given one vote per citation, multiplied by the number of hubs the citer is followed by.)
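That voting rule might be sketched like so (this is my reading of the description above, with hypothetical names; it is not Hourly Press’s actual implementation):

```python
# Sketch of the voting rule described above: each shared link gets one
# vote per citation, weighted by how many curator-chosen hubs follow
# the citer. (Hypothetical names; not Hourly Press's actual code.)

def score_links(shares, hub_follower_counts):
    """shares: (user, link) pairs from the time window.
    hub_follower_counts: user -> number of hubs following that user."""
    scores = {}
    for user, link in shares:
        scores[link] = scores.get(link, 0) + hub_follower_counts.get(user, 0)
    # highest-scoring links first
    return sorted(scores, key=scores.get, reverse=True)

shares = [("alice", "story1"), ("bob", "story1"), ("carol", "story2")]
hub_follower_counts = {"alice": 2, "bob": 1, "carol": 4}
ranking = score_links(shares, hub_follower_counts)
# story2 scores 4, story1 scores 2 + 1 = 3, so story2 ranks first
```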

    Now, in this model, there is a human standing by the result: the curator. For example, Nieman Journalism Lab is (at least in part) standing by the selection of links on their newspage: http://niemanlab.hourlypress.com/. However, they have not done the actual selection of links… rather, they chose how to distribute the authority to select newsworthy links.

    We’re also experimenting with a model that requires less manual curation: users are clustered based on what they’re talking about and who they are following; HITS/PageRank/etc is used to analyze the latent authority structure of those clusters; links are selected based on frequency and authority. The legitimacy of this approach would be grounded in algorithms and (inherently arbitrary) parameters used to program them.

    Now, I need to point out that, in practice, these approaches aren’t that different. In both cases, the decisions of the key social network connectors in twitter are what drives the selection process. In both the “manual” and “automatic” modes, choices are made to delegate the decision of authority to such hubs. The difference, I think, is whether an institution is empowered to tweak that delegation to its own ends or judgment.

    We don’t know whether institutionally-grounded or emergent authority structures will produce the better news page. We hope to find out.

    Also, while the judgment I’ve been alluding to so far is about newsworthiness, with our “retrospective news” model (http://lynheadley.posterous.com/retrospective-news-an-example), the selections could be used to direct funding after-the-fact, affecting not only what gets spread, but what gets produced.

  10. links for 2009-11-16 | Joanna Geary Says:

    [...] A Speculative Post on the Idea of Algorithmic Authority « Clay Shirky "There’s a spectrum of authority from “Good enough to settle a bar bet” to “Evidence to include in a dissertation defense”, and most uses of algorithmic authority right now cluster around the inebriated end of that spectrum, but the important thing is that it is a spectrum, that algorithmic authority is on it, and that current forces seem set to push it further up the spectrum to an increasing number and variety of groups that regard these kinds of sources as authoritative." (tags: newspapers trust) Share this [...]

  11. Michael Brey Says:


In “Obedience to Authority: An Experimental View” (1974) Milgram points out that situational influences on behavior are often much greater than common sense would dictate. In other words: authority originates through context rather than through persona. In Milgram’s own words: “the social psychology of this century reveals a major lesson: Often, it is not so much the kind of person a man (or woman) is as the kind of situation in which he (or she) finds him (or her) self that determines how he (or she) will act”.
    Thank you for your thoughts on Algorithmic Authority. Now I ask: What would Milgram do in a world where the real small world of each human being was substituted by ever changing unending small worlds?

  12. paolo Says:

You say “it’s impossible to be right all the time, but it’s much better to be wrong on good authority than otherwise, because if you’re wrong on good authority, it’s not your fault.” Of course, nowadays it is easier and easier to find a reference group with ideas as minoritarian as you would like …
I can suggest “Republic.com” by Cass R. Sunstein (http://press.princeton.edu/titles/7014.html) and all his following books.

    You might also like this picture “Trust us, we’re expert” ;)

    You might … uhm … like a paper of mine “Trust metrics on controversial users: balancing between tyranny of the majority and echo chambers”
    I’ve been mulling about “trust networks” (even with empirical experiments…) for years now … but unfortunately without reaching your levels of uber-clarity …

  13. paolo Says:

Being a total relativist, I would not say that “The American Psychiatric Association says there is a mental disorder called psychosis” is correct by definition. What you refer to as “The American Psychiatric Association” might be just a few of its members, or someone who calls itself “The American Psychiatric Association” but is in reality “The American Psychiatric Convention”, and there might be more than one “The American Psychiatric Association” registered in different places, such as on the UN registry and the National US Health registry and the Campaign for a different Health registry and so on …

    I would not even say that “Gravity is a force” or “on Earth there is gravity” are correct by definition. I would not even say that “2+2=4” is correct by definition. What if I say that 2+2=5?

    And what about “Pluto is not a planet”? ;)
    Or more painfully, “Palestine is not a state” …

  14. paolo Says:

    “… and these processes are eroding the previous institutional monopoly on the kind of authority we are used to in a number of public spheres, including the sphere of news.”
    And in the sphere of science!
    And in the sphere of Money (read “banks”, what if my reference group and I stop accepting the central banks system as the authority which can say “you have 3000 euros”? That would be “the end of the world as we know it!!!” See http://www.gnuband.org/2007/04/09/money_as_debt/ )

  15. Television Archiving » Blog Archive » A Speculative Post on the Idea of Algorithmic Authority « Clay Shirky Says:

    [...] Visit A Speculative Post on the Idea of Algorithmic Authority « Clay Shirky Hype: [...]

  16. azeem Says:

    Hey Clay

    Great post–very thoughtful and summed up some things we were thinking about. I did a post responding/building on what you said


  17. Lumiere Says:

    [..] Algorithmic authority is not an unmanaged process, contrary to this claim. [..]

  18. dilbert dogbert Says:

    The process of growing up is the process whereby you learn that all the giants in your life, parents etc, have feet of clay. Some people never go thru this process with the other giants in their lives – governments, political parties media etc.

  19. links for 2009-11-16 | Bailout and Financial Crisis News Says:

    [...] Clay Shirky: A Speculative Post on the Idea of Algorithmic Authority [...]

  20. Derek Powazek - links for 2009-11-16 Says:

    [...] A Speculative Post on the Idea of Algorithmic Authority « Clay Shirky "Authority is a social agreement, not a culturally independent fact." A must-read. (tags: authority community clayshirky) [...]

  21. Conor White-Sullivan Says:

    Great post. Made me think of Beth Noveck’s Wiki Government article

    The premise there was that government does not get better information from experts, and should open up collaborative platforms like “peer to patent” to allow self-selected citizens to influence policy.

    Here is a quote:

    “In his award-winning book On Political Judgment, social psychologist Philip Tetlock analyzed the predictions of those professionals who advise government about political and economic trends. Pitting these professional pundits against minimalist performance benchmarks, he found “few signs that expertise translates into greater ability to make either ‘well-calibrated’ or ‘discriminating’ forecasts.”

    It turns out that professional status has much less bearing on the quality of information than we might assume, and that professionals–whether in politics or other domains–are notoriously unsuccessful at making informed predictions.”


Just like everyone else (and possibly to an even greater degree), the reason that government goes with “professional” and “accredited” information sources might not be because they are more often right, but because they are trusted by others in government, and therefore have less impact on the standing of the public servant when they are wrong.

    The idea of distributed social authority has big implications for collaborative platforms and governance. Hopefully government will put some authority in the hands of citizens faster than the school teachers who are still trying to tell kids to ignore Wikipedia.

  22. Brown Bourne Says:

    “Nietzsche is Dead” -God

    “Clay Shirky is not an authority on Wikipedia’s authoritativeness.” -Wikipedia as of 1:35 am on Nov. 16 (http://en.wikipedia.org/wiki/Clay_Shirky)

    Brown Bourne
    Blog: http://brownbourne.wordpress.com
    Roll: http://brownbourne.wikidot.com

  23. metarand » Blog Archive » Algorithmic Authority versus Digital Curation: The spectrum’s edge Says:

    [...] Shirky has coalesced some thoughts around the notion of algorithmic authority. In it he talks about the nature of authority within the [...]

  24. Andrew Gradman Says:

I must confess, I’m partial to the comment above by DanDotLewis@Twitter.

    He writes, “I read this post having never heard of Khotyn, yet I believe you when you say it’s in the Ukraine. Why? Because you have no reason to steer me wrong (twice)”.

    That is similar to my own account for why we tend to trust uncited claims on Wikipedia (e.g. “After pollination, a tube grows from the pollen through the stigma into the ovary to the ovule and sperm are transferred from the pollen to the ovule, within the ovule the sperm unites with the egg, forming a diploid zygote”).

    However, the fidelity I am thinking of is in terms of interference, not motives. Certain claims are, on their face, relatively “immune” to the game of “telephone;” they’re just too objective to be vulnerable to error; it takes a vivid imagination to construct a story in which the transcriber(s) of that statement somehow diverged from their “authorities”.

    What we’re really interested in is the “less objective” (“easily confused”) claims. As a law student Wikipedian (who deals with this latter, “easily confused” kind of claim more often than most Wikipedians), I doubt that Wikipedia could EVER become an “authority” on these claims. It can become “reliable” — but only by providing footnotes to authorities! In this regard, DanDotLewis@Twitter is also correct: “It’s clear that Wikipedia is not, itself, accountable for the content it holds, and it’s also clear that there really is no way to hold the authors (and editors) of Wikipedia accountable for their content either. … really, you want to click the Wikipedia citation link or the first few Google links to make sure you trust the source upon which the algorithm relies.”

    I spend much of my time trying to engineer a method to groupsource the learning we do in law school. We’re developing an internal Wiki to host classnotes, outlines, casebriefs, etc.– essentially, a mirror of Wikipedia, but limited to our community. Will this platform ever develop to the point where my classmates can rely on “heuristic due diligence” when trusting its content? Only in three respects.

    1) In the first sense that DanDotLewis@Twitter describes — i.e., “I believe this claim because I can hardly imagine a universe in which someone got that claim WRONG.” It is the sort of information immune to the game of telephone. e.g., “The plaintiff in Roe v. Wade was a pregnant woman whose real name was… “.


    2) In his second sense: we see a claim that is not “telephone-proof on its face”, but since it has a footnote to a “reliable authority”, we don’t check it. (Well, then we’re making the “telephone” argument again: “I can hardly imagine a universe in which someone sought out a footnote to a Reliable Authority and nevertheless got that claim WRONG.”)


    3) A special sense which DanDotLewis did not mention: I can tell you that in developing my own classnotes, I don’t always read the assigned readings or go to class. Instead, I can take six or seven sets of my classmates’ notes and attempt to reconcile them. I discard, as unreliable, claims that are in fewer than (say) two of the sets of notes; my final document aggregates all those claims which are in three or four sets of notes. I trust my final (“meta”) set of notes because I know the “story” behind how they were made. But if I were to circulate them “without” that story (e.g. without a disclaimer, or without footnotes to the other notes), people would place no more reliance in them than I did on any other set of notes.
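The discard-and-aggregate step described in (3) amounts to a simple frequency threshold over claims. A minimal sketch, using hypothetical data rather than any actual tool:

```python
# Keep only claims that appear in at least `threshold` of the collected
# sets of notes; everything rarer is discarded as unreliable.
from collections import Counter

def reconcile(note_sets, threshold=3):
    counts = Counter()
    for notes in note_sets:
        counts.update(notes)  # count each claim once per set of notes
    return {claim for claim, n in counts.items() if n >= threshold}

notes = [
    {"claim A", "claim B"},
    {"claim A", "claim C"},
    {"claim A", "claim B", "claim D"},
    {"claim B"},
]
kept = reconcile(notes)
# claim A and claim B each appear 3 times; C and D are dropped
```

The output carries no trace of the “story” behind it, which is exactly the commenter’s point: without the provenance, readers have no reason to trust the aggregate more than any single set of notes.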

    In this sense, your “Third Feature” of algorithmic authority correctly distinguishes those claims which are persuasive to me, versus the claims which are persuasive to others. However, let’s look at what I’d have to DO to make my “meta” notes persuasive to others.

    a) I could include footnotes to each of the three-or-four sets of notes that I was aggregating. That would be like using WikiTrust to test claims in Wikipedia. But I think this is disfavored BY EMPIRICAL OBSERVATION — That’s just not what Wikipedians do!

    b) I could precede it with a disclaimer, “I made these notes by aggregating lots of notes.” We’re no longer asking people to believe my ability to understand the law; we’re only asking them to trust my ability to do simple counting and copy-paste stuff in documents. This works, so long as the meta-text doesn’t get stripped from the notes.

    c) I could include footnotes to “RELIABLE SOURCES.” It would be costly for ME to do it alone, but it would be cheap to groupsource. This is FAVORED because the footnotes to reliable sources are part of the text; and because they’re VERIFIABLE.


All I can say is this: I’m going to keep relying on (c), because my classmates have been remarkably unpersuaded by the meta-text in (b). They are WRONG not to trust me, but still, they don’t.

  25. Tom Says:

    Congratulations, you’ve just rediscovered Hobbes’ argument regarding authority and truth claims from *The Leviathan*.

  26. Bruno Boutot Says:

    Very interesting.
When I read “Khotyn” in the 4th paragraph, I did something I do 100 times a day: I selected the word, right-clicked “Search Google for”, and got 7 results on the first page with “Ukraine”. Google is not the authority, but it finds seven “convergent” sources.

I had a conversation the other day with Florence Devouard of French Wikimedia about members’ identity in communities. Though we agreed that email and IP numbers don’t mean much, there is something like a “convergent” or “distributed” identity when members add to their personal page their facebook page, their delicious account, their flickr gallery, etc.

    Also, when using Google language tools, I don’t use “Translate text” but “Search across languages” which searches for the same sentence or group of words in texts already translated and gives 10 results on the first page. This is another case of “distributed” or “convergent” results.

    I am not sure that this is the same thing as “authority”, but it’s akin to triangulation: obtaining a result from the convergence of unrelated sources.

  27. Frank Pasquale Says:

    The algorithms are often secret…leading me to think about Christopher Kutz’s recent article on the “repugnance of secret law”:

    I have a paper on this if anyone’s interested.

  28. Larry Irons Says:

“Authority is as authority does.” To me this is the point on which your perspective in this post pivots. You say authority is a social agreement rather than a culturally independent fact. And I think you are right about that. However, I don’t think the two are different in kind as much as more or less malleable to the action of agencies, human or otherwise.

    Both social agreements and culturally independent facts depend on significant symbols to direct inquiry and questioning. We don’t just go looking for something to find when we search the web. We go looking for particular kinds of things. It seems to me that for your point of view to persuade it needs to argue that the kind of things we go looking for on the web depend on an “algorithmic authority.”

    Can Google predict what shows up in its page rankings, or does it simply predict how it shows up? Defining the way we categorize things is how I understand authority, at least in the realm of information. You need to look closely at Lera Boroditsky’s research on cognition and language, particularly on her contentions regarding Sapir-Whorf.

  29. jeff ubois Says:

    Michael Jensen’s piece, The New Metrics of Scholarly Authority, is now locked behind a pay wall by the Chronicle of Higher Education (http://chronicle.com/article/The-New-Metrics-of-Scholarly/5449), but it’s nicely summarized at http://scanblog.blogspot.com/2007/07/new-metrics-of-scholarly-authority.html, and well worth a read.

  30. Yishay Mor Says:

    As always, a great, thought provoking read. One that would take some time to get my head around. Still, a few thoughts.
    Wikipedia and google are both considered as “authority” by many. I say “authority” and not authority, because the fact that they are normally accepted as true should not imply normativity. There is a huge difference between them: first, the process of establishing a statement in wikipedia is transparent and easy to trace. Second, it involves human agency.
Google does not provide authority, in the normative sense. It provides likelihood. Wikipedia provides a more efficient and cost-effective implementation of essentially the same protocol used by Britannica.

  31. Andrew Gradman Says:

    forgive me for a redundant post, but I wanted to make sure you saw the comment I made at the bottom of my last post: “Heuristic due diligence” I think would be a great term for what you’re describing.

  32. Stephen Wilson Says:

    Two points that may or may not help this very interesting analysis.

    Firstly, the idea that being wrong can be ameliorated by the way in which you got to be wrong, is powerful. It lies at the heart of the way legal liability is gauged. If I write software for a medical device and there’s a bug in my code and someone dies, then my liability steadily decreases over the following scenarios:
    - my code was untested
    - it was tested by me
    - it was tested independently by someone else
    - that other tester followed a testing laboratory standard like ISO 17025
    - the tester’s ISO 17025 compliance was formally inspected by a national accreditation authority.

    [It's turtles down only two or three levels, stopping at the International Laboratory Accreditation Cooperation ILAC ;-)]

    That is, process is supremely important.

    My second point is that not all processes are algorithmic, so the term “algorithmic authority” might be wrong — or else it might be deeply correct and instructive.
    An algorithm is a formal procedure that can be carried out mechanically (by a digital computer, or Turing machine) without conscious intervention. With the same inputs and initial conditions, the outcome of an algorithm is always the same.

    Logicians and philosophers have known for a long time that there are fundamental constraints on what can be accomplished in algorithmic fashion. In particular, there are problems that have no algorithmic solution. The most famous example is probably the Halting Problem in computer science: it is not possible to write an algorithm that will tell you whether any given computer program will halt or not. This inability is profound, because the Halting Problem is simple compared with some of the other things that people might try to do algorithmically.
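    The argument can be sketched in a few lines of Python (the `halts` oracle here is hypothetical; the whole point of the argument is that no correct version of it can ever be written):

```python
# Sketch of the classic diagonalization argument behind the Halting
# Problem. The oracle `halts` is hypothetical -- the argument shows
# that no correct implementation of it can exist.

def contrary(halts):
    """Given a claimed halting oracle, build a program it must misjudge."""
    def trouble():
        if halts(trouble):   # oracle predicts trouble() halts...
            while True:      # ...so loop forever instead
                pass
        # oracle predicts trouble() loops forever, so halt immediately
    return trouble

# Whatever halts(trouble) answers, trouble() does the opposite,
# so no correct `halts` can be written.
```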

    So my question is, just how authoritative can any truly algorithmic authority ever be? It will be fundamentally limited in ways that are probably difficult to predict.

    But then again, maybe that’s the (unintended) point of the term. We might say that algorithmic authority can generate knowledge, but never “wisdom”.

  33. David Locke Says:

    If you ask a librarian what authority is, it is a list of authors that takes all the permutations of a name as it appears on published works and directs you to the author’s full name, along with its various spellings and pseudonyms.

    What you end up with is a single string from any number of search terms. Librarians evolved authority lists to reduce the need to list every pseudonym in each catalog record. The catalog record lists the author as their name appears on the published entity.
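    A minimal sketch of such an authority file as a data structure (the entries are illustrative, using Mark Twain’s pseudonyms, and are not drawn from any real catalog):

```python
# A librarian's authority file, sketched as a mapping from name
# variants (as they appear on published works) to one canonical
# authorized heading. Entries are illustrative only.
AUTHORITY_FILE = {
    "Twain, Mark": "Clemens, Samuel Langhorne, 1835-1910",
    "Clemens, S. L.": "Clemens, Samuel Langhorne, 1835-1910",
    "Snodgrass, Thomas Jefferson": "Clemens, Samuel Langhorne, 1835-1910",
}

def authorized_heading(name_as_published):
    """Resolve any variant or pseudonym to the single authorized string."""
    return AUTHORITY_FILE.get(name_as_published, name_as_published)
```

Any number of search terms collapse to a single string, which is the whole job of the list; no trust judgment is involved.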

    There is no issue of trust relative to this list. If you want a trust index, then call it a trust index.

    Computer science types always come up with their own version of other people’s expertise and apply different names to it. Yahoo did it when they created “the first internet ontology,” this without asking librarians how they’ve done it since hell thawed out.

    Be a hero by ignoring experts. IT in particular sings the kill the functional silos song, but why do we have functional silos? We have them because they know (75%) things. Political silos, yes, kill those. In the end meaning is completely trashed when it is automated.

    Algorithmic authority started sounding like an index of algorithms maybe tied to the software patent index, which would be followed by the enforcement of registration of each algorithm used in our code, as a prelude to a recording-industry-like royalties, anti-sampling, and draconian license enforcement mechanism aimed at extracting payment for using that queue algorithm you learned back in data structures class.

    Don’t give the whacks any ideas.

  34. Andrew Gradman Says:

    How about “neural authority”, or “network authority,” or “emergent authority”?

    The difficulty is that we need the term not to gloss over that peculiar quality of the statement “Khotyn-is-in-Moldova” — namely, that it’s not SO social that there ISN’T a right answer. (Even if that statement doesn’t fully meet that condition, there are statements that do; as Ian Hacking said, “if you can spray them, then they are real” —
    http://bit.ly/1xgPPv )

    Thinking out loud: We’re trying to get this term to convey the three elements you’ve attributed to it:
    (1) Due to the character of the assembly (not the pieces), (2) statements emerge that are true (3) and trusted.
    [I am not persuaded by the distinction you make in (3), that “people become aware not just of their own trust but of the trust of others”. I would expect my-trust and public-trust to arise from a common mechanism: namely, that the story for how the claim was constructed meets a shared definition of “trustworthy.” The only reason that you and I would diverge is if we don’t share that definition.]

    If you get writer’s block on this point (as I already have!), maybe you should reconsider whether the second word should be “authority.” When you put an adjective before that word, it becomes unclear whether the adjective is meant to describe “the story for how the claim came to be recognized as authoritative” (which is what you mean for it to do), or whether that adjective means to describe “authority in a qualified sense — algorithmic rather than true.”

    How about “Heuristic due diligence?”

  35. A Speculative Post on the Idea of Algorithmic Authority « Clay Shirky « Netcrema – creme de la social news via digg + delicious + stumpleupon + reddit Says:

    [...] A Speculative Post on the Idea of Algorithmic Authority « Clay Shirkyshirky.com [...]

  36. Scott Mattoon Says:

    One could argue that the recent financial meltdown can be attributed to the clash of algorithmic authority with legacy hierarchical systems of authority. When established ratings systems could no longer keep pace with derivative markets, algorithm-based investment decisions ran roughshod over the established systems of authority.

    About a year ago, a piece on FT.com by Sam Jones entitled “When junk was gold” reminded me how much the credit world has changed since my dad left the finance business more than 20 years ago. He led one of the U.S.’s largest leasing firms at the time. They relied upon the ratings agencies that are dissected in this FT.com article, as did thousands of other businesses, right up until September 2008, when Moody’s, S&P and Fitch ratings suddenly lost their meaning.

    For a subscription fee, my dad’s firm had access to the professional ratings of Moody’s, which earned all its revenue through a subscription service to investors. His business, and other firms in the business of credit, did not view the dependence on ratings agencies as a risk to their own business. After all, Moody’s were experts at identifying risk to investors. A much greater exposure for a leasing company would be to forgo the use of ratings in its decision making and instead tackle the research itself. The ratings services epitomized capitalistic efficiency. Businesses were not suspicious of ratings because the ratings agencies worked hard to avoid conflicts of interest, and brands like Moody’s were nothing if not trustworthy.

    Then, in the early nineties, “structured finance” arrived on the investment scene. The product of “algorithmic authority”, these financial instruments launched an arms race of increasingly complex scenarios that burdened the ratings companies to the point of abandoning thorough research, and impelling them to accept fees from bond issuers as they struggled to maintain their preeminence over the measurement of investment risk. This led to billion dollar deals rated in just hours and situations like Enron receiving investment-grade ratings just a month before its collapse. As my dad puts it, “The ratings guys were asleep at the wheel.”

    Now, more than a year after trust in vaunted institutions like Moody’s was decimated, we’re picking up the pieces. From the rubble, I expect, new ratings systems will rise. It remains to be seen whether these will be more like the old ones my dad’s generation of investors trusted, new systems deriving from networks of people acting as stewards of information, or systems of pure data analytics across a sea of publicly accessible information. I suspect that the world of finance is not ready for a social-network-based ratings system, but I also believe we’re only a small number of financial crises away from the power of organized crowds overtaking hierarchical institutions as the dominant authority on financial risk analysis.

  37. Alex Howard Says:

    As I read this, the results of systems founded upon algorithmic authority sounded more and more like the prediction markets Andrew McAfee has written about in enterprise social computing implementations.

    When computing becomes social, interesting things happen, especially when the community is willing to consider questions posed by influential nodes within the given system. The key to an accurate prediction for a given topic, in those contexts, appears to be founded in the diversity of the network that receives the query.

    Over time, the most authoritative subject experts in those systems become clear through both their own contributions and the citations they receive. As you point out, proof of the success of such a framework exists at Wikipedia and within Google’s PageRank.
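    As a reference point, the core of PageRank fits in a few lines: a toy power iteration over an invented three-page link graph, not Google’s production algorithm:

```python
# Toy PageRank: repeated redistribution of rank along links turns
# citation structure into an authority score. Graph and parameters
# are illustrative only.
links = {
    "a": ["b", "c"],   # page a links to b and c
    "b": ["c"],
    "c": ["a"],
}

def pagerank(links, damping=0.85, iters=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            share = rank[p] / len(outs)   # split p's rank among its links
            for q in outs:
                new[q] += damping * share
        rank = new
    return rank
```

Pages cited by well-cited pages end up with the most rank, which is the “authority emerges from the assembly” idea in miniature.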

    In any case, I’m glad you shared your thoughts on social judgment of trust. Deciding who to believe and why is more challenging than ever in the Information Age.

  38. Brian Slesinsky Says:

    Or perhaps a better term would be machine-assisted conventional wisdom. It’s closely related to rumor and gossip (in the short term) and tradition (in the long term).

    Conventional wisdom is accountable to nobody and there is no recourse if it’s wrong, and yet it arises from social processes and is often trusted. We should expect that putting machines into the middle of the process of generating it will have drastic effects on how conventional wisdom arises. Certainly this has been the case for television, email, search engines, blogging, and social networks.

  39. David Semeria Says:

    If that was a ramble, I wouldn’t like to see what you can do with a map!

    Great stuff.

    It’s like democracy; the majority is right even when it’s wrong.

  40. Brian Slesinsky Says:

    Perhaps the right term would be machine-assisted authority. For example, it’s easier to tell the truth with a camera or a tape recorder than when relying on your unaided memory. The proliferation of camera phones means that many more people can participate in making authoritative observations.

    Another example is the hyperlink. Bibliographical citations have been around a long time, but they were so difficult to use that in practice they were only used by specialists. The hyperlink allows more people to participate in making secondary accounts that are supported by citations, and this is what allows Wikipedia to exist.

    You could say that a search engine is very heavily machine-assisted authority.

  41. Terry Jones (@terrycojones) Says:

    I meant to add that some part of the idea of Algorithmic Authority can be dropped to the level of data.

    If the data model allows for the accumulation of trust information, it can provide the foundation for multiple algorithmic approaches. It also means that the final word on the subject is not in the hands of any one algorithm (or the application that uses it, if you prefer). That’s also an important belief behind FluidDB: that the last word on information (our information!) should not be in the hands of an application. Having an app provide an API to its (our?) data is also not enough. The data itself should have an API. But that’s another subject altogether.
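    One way to picture this, as a generic sketch with invented names rather than FluidDB’s actual API: a store where trust annotations accumulate per object, no one can remove anyone else’s, and any algorithm can aggregate them its own way:

```python
from collections import defaultdict

# Generic sketch of trust information living at the data level.
# All names here are invented for illustration; this is not
# FluidDB's actual model or API.
class AnnotatedStore:
    def __init__(self):
        self.trust = defaultdict(list)  # object id -> [(annotator, score)]

    def annotate(self, obj_id, who, score):
        """Anyone may add trust info; additions never overwrite others'."""
        self.trust[obj_id].append((who, score))

    def mean_trust(self, obj_id):
        """One possible algorithm over the accumulated data; others could
        weight annotators differently without changing the store."""
        scores = [s for _, s in self.trust[obj_id]]
        return sum(scores) / len(scores) if scores else None
```

The point of the sketch is that `mean_trust` is just one consumer: the annotations outlive any single algorithm or application.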

  42. rune ytreberg Says:


    Could you call algorithmic authority “social authority”?

  43. Terry Jones (@terrycojones) Says:

    Hi Clay

    As usual, I agree, and as usual there are nice correspondences with what you think/write and what we’re trying to build with FluidDB.

    One of the aims of FluidDB is to provide an evolutionary information storage architecture. One aspect of this is the evolution of reputation and trust, both of data and of actors (users and apps). FluidDB provides an explicit locus for information of that kind to accumulate, it has identities, it has an information control model that allows the accumulation no matter what (no-one gets to stop anyone else from adding to existing data), and it provides a query language for getting at that new information, whatever it is. This probably sounds a bit abstract / general – apologies if so.

    The idea is that over time people and applications that use FluidDB can come to trust various sources *and* have a place to leave additional information (metadata if you like) about what and who they use / trust, etc. That’s all completely open-ended due to the information control model.

    I’ll stop for now, but I’m happy to say more, as always. Thanks for taking the time to write up your thoughts.


  44. glmaranto Says:

    I went from this post to one on John Grant’s greenormal blog and found the contrast striking: there are times when “authority”, “reliability”, and “accountability” are singularly rejected.

    Here’s the link to Grant’s post, “Legal Disclaimer”:

  45. DanDotLewis@Twitter Says:

    Dave Winer, in the past, has been one of the many who have been critical of Wikipedia because it is not (and in his eyes, cannot be) authoritative. And if you read what he’s written, I think you’ll find, as I do, that he’s simply conflating “authority” with “accountability.” It’s clear that Wikipedia is not, itself, accountable for the content it holds, and it’s also clear that there really is no way to hold the authors (and editors) of Wikipedia accountable for their content either.

    However, I don’t think that his mistake — and the mistake of others in kind — is best rebutted by your naked assertion that Wikipedia is authoritative because, tautologically, it’s authoritative. Rather, I think you’re conflating “authority” with “reliability.”

    Take your Khotyn example. I read this post having never heard of Khotyn, yet I believe you when you say it’s in the Ukraine. Why? Because you have no reason to steer me wrong (twice); it would detract from your argument if you did, and knowing what I know of you, I believe you have the tools and ability to find an authoritative source (if you aren’t one yourself) as to where the town is. Plus, the notion that there’s a Ukrainian town called Khotyn passes my BS detector. All combined, it boils down to reliability.

    But authoritative? Not so much, for the simple reason that — given my ignorance as to *why* you know Khotyn is in the Ukraine — it’s possible that you are innocently wrong; and, if you are indeed wrong, no one important would hold it against you. Heck, you could probably explain why the error occurred and maintain some reasonable sense of reliability regarding Eastern European geography from your readers!

    I do agree, however, that at some point, reliability leads to authority. However, algorithmic authority, as you describe it, seems to fail without the adjective; that is, it’s as authoritative as an algorithmically developed source can get, and therefore, “merely” “very very reliable.” It’s good enough to settle a bar bet, and may even be OK to cite for your job, but really, you want to click the Wikipedia citation link or the first few Google links to make sure you trust the source upon which the algorithm relies.

  46. Brian Slesinsky Says:

    This reminds me of a comic:


  47. Scott Rosenberg Says:

    The post is, as usual, an inspiring model of clarity, Clay. I think you’re right, though, to sense that the term “algorithmic authority” may need to go: it’s likely to be distractingly confusing.

    The authority you describe derives from processes that are mostly social. You could certainly describe such processes as “social algorithms”; they are, after all, sets of instructions to follow. But the term “algorithm” is almost exclusively associated in the public mind with “computer algorithm.” And that means that people who hear the term “algorithmic authority” will jump to the mistaken conclusion that you’re talking about “ceding authority to machines,” and they will box with that straw man rather than grapple with the real point.

    This is admittedly more a matter of rhetorical tactics than a substantive point, but it might be worth thinking about…

  48. NS Says:

    You are absolutely right. A professor at the Stern School recommended Wikipedia as an external reference, and it changed the view of most students in the class, who until then had thought its entries might or might not be factually correct.

  49. Brian Slesinsky Says:

    Here are a few other examples for comparison:

    - A dictionary’s authority comes from its editors. But if the dictionary editors have a descriptivist philosophy, they are simply trying to describe the language as it is actually used. Their authority comes from their commitment to being unbiased aggregators of language usage by people who are not authorities.

    - Elections depend on the authority of government election officials. But their authority comes from running fair elections and protecting the aggregation process from people who would bias the results. Again, they attempt to be unbiased aggregators of voters’ decisions.

    Similarly, I would say that a search engine’s authority comes from the commitment of its maintainers to avoid bias and to protect it from people who would bias the results.

  50. Jay Steele Says:

    Great article Clay. I love the way you write!

    As I read your article I was struck with the notion that one of the ultimate representations of algorithmic authority is the US Dollar. While it may have had its origins in a singular institution, I believe it is the three characteristics you outline above that allow it to continue to be used as a medium of exchange.

    Take care.


Comments are closed.