30] William Dembski “dispensed with” the Explanatory Filter (EF) and thus Intelligent Design cannot work
This quote by Dembski is probably what you are referring to:
I’ve pretty much dispensed with the EF. It suggests that chance, necessity, and design are mutually exclusive. They are not. Straight CSI is clearer as a criterion for design detection.
In a nutshell: Bill made a quick off-the-cuff remark using an unfortunately ambiguous phrase, which was immediately latched onto and grossly distorted by Darwinists, who claimed that the "EF does not work" and that "it is a zombie still being pushed by ID proponents despite Bill disavowing it years ago." But in fact, as the context makes clear (i.e. we are dealing with a real case of "quote-mining" [cf. here vs. here]), the CSI concept is in part based on the properly understood logic of the EF. It is just that, having gone through that logic, it is easier and "clearer" to then use "straight CSI" as an empirically well-supported, reliable sign of design.
In greater detail: The above is the point of Dembski's clarifying remarks that: ". . . what gets you to the design node in the EF is SC (specified complexity). So working with the EF or SC end up being interchangeable." [For an illustrative instance, contextually responsive ASCII text in English of at least 143 characters is a "reasonably good example" of CSI. How many cases of such text can you cite that were wholly produced by chance and/or necessity without design (which includes the design of Genetic Algorithms and their search targets and/or the oracles that broadcast "warmer/cooler")?]
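A rough back-of-the-envelope sketch may help show why 143 characters is the figure cited. It assumes the standard 128-symbol ASCII alphabet (7 bits per character) and uses the ~500-bit universal probability bound (about 1 in 10^150) plus a more conservative 1,000-bit threshold as stand-in cutoffs; the variable names are illustrative only:

```python
# Illustrative sketch: why 143 ASCII characters exceed the cited complexity thresholds.

ASCII_BITS_PER_CHAR = 7      # 128-symbol ASCII alphabet -> log2(128) = 7 bits per character
CHARS = 143                  # the figure cited above

total_bits = CHARS * ASCII_BITS_PER_CHAR   # 1001 bits of configurational possibilities

UPB_BITS = 500               # ~1 in 10^150, Dembski's universal probability bound
CONSERVATIVE_BITS = 1000     # a more conservative threshold often used in these discussions

print(f"{CHARS} ASCII characters span about {total_bits} bits of possibilities")
print(f"Exceeds the ~500-bit universal probability bound: {total_bits > UPB_BITS}")
print(f"Exceeds the conservative 1000-bit threshold:      {total_bits > CONSERVATIVE_BITS}")
```

On those assumptions, 143 characters correspond to about 1,001 bits of configurational possibilities, comfortably beyond both cutoffs.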
Dembski responded to such latching-on as follows, first acknowledging that he had spoken "off-hand" and then clarifying his position in light of the unfortunate ambiguity of the phrasal verb "dispensed with":
In an off-hand comment in a thread on this blog I remarked that I was dispensing with the Explanatory Filter in favor of just going with straight-up specified complexity. On further reflection, I think the Explanatory Filter ranks among the most brilliant inventions of all time (right up there with sliced bread). I’m herewith reinstating it — it will appear, without reservation or hesitation, in all my future work on design detection.
[….]
I came up with the EF on observing example after example in which people were trying to sift among necessity, chance, and design to come up with the right explanation. The EF is what philosophers of science call a “rational reconstruction” — it takes pre-theoretic ordinary reasoning and attempts to give it logical precision. But what gets you to the design node in the EF is SC (specified complexity). So working with the EF or SC end up being interchangeable. In THE DESIGN OF LIFE (published 2007), I simply go with SC. In UNDERSTANDING INTELLIGENT DESIGN (published 2008), I go back to the EF. I was thinking of just sticking with SC in the future, but with critics crowing about the demise of the EF, I’ll make sure it stays in circulation.
Underlying issue: Now, the "rational reconstruction" basis for the EF as it is presented (especially in the flowcharts circa 1998) implies that there are facets of the EF that are contextual, intuitive and/or implicit. For instance, even so simple a case as a tumbling die that then settles has necessity (gravity), chance (rolling and tumbling) and design (tossing a die to play a game, and/or the die may be loaded) as possible inputs. So, in applying the EF, we must first isolate the relevant aspects of the situation, object or system under study, and apply the EF to each key aspect in turn. Then we can draw up an overall picture that shows the roles played by chance, necessity and agency.
To do that, we may summarize the "in-practice EF" a bit more precisely as follows (an illustrative code sketch of the decision logic appears after the list):
1] Observe an object, system, event or situation, identifying key aspects.
2] For each such aspect, identify whether it shows high or low contingency. (If contingency is low, seek to identify and characterize the relevant law(s) at work.)
3] For aspects showing high contingency, identify whether there is complexity plus specification. (If there is no recognizable independent specification and/or the aspect is insufficiently complex relative to the universal probability bound, chance cannot be ruled out as the dominant factor; indeed, it is the default explanation for high contingency. [One may then also try to characterize the relevant probability distribution.])
4] Where CSI is present, design is inferred as the best current explanation for the relevant aspect, as there is abundant empirical support for that inference. (One may then try to infer the possible purposes, identify candidate designers, and even reverse-engineer the design (e.g. using TRIZ), etc. [This is one reason why inferring design does not "stop" either scientific investigation or creative invention. Indeed, given their motto "thinking God's thoughts after him," the founders of modern science were trying to reverse-engineer what they understood to be God's creation.])
5] On completing the exercise for the set of key aspects, compose an overall explanatory narrative for the object, event, system or situation that incorporates the aspects dominated by law-like necessity, chance and design. (Such a narrative may include recommendations for onward investigations and/or applications.)
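For readers who find a flowchart easier to follow as code, here is a minimal sketch of the per-aspect decision logic in steps 1 to 5 above. It is an illustration only: the threshold figure, field names and toy die "aspects" are assumptions made for the example, not part of Dembski's formal apparatus.

```python
# Minimal illustrative sketch of the per-aspect "in-practice EF" decision logic.
from dataclasses import dataclass

COMPLEXITY_THRESHOLD_BITS = 500  # assumed stand-in for the universal probability bound

@dataclass
class Aspect:
    name: str
    high_contingency: bool          # step 2: does the aspect vary freely, or is it law-like?
    complexity_bits: float          # step 3: information/improbability measure for the aspect
    independently_specified: bool   # step 3: does it match an independent specification?

def classify(aspect: Aspect) -> str:
    """Return the default/best explanation for one aspect of the system."""
    if not aspect.high_contingency:
        return "necessity (natural law)"             # step 2: low contingency -> law
    if aspect.independently_specified and aspect.complexity_bits > COMPLEXITY_THRESHOLD_BITS:
        return "design (CSI present)"                # step 4: complex AND specified
    return "chance (default for high contingency)"   # step 3: chance not ruled out

# Step 5: compose an overall picture from the per-aspect classifications,
# using the tumbling-die example discussed above as toy inputs.
aspects = [
    Aspect("die settles under gravity", high_contingency=False,
           complexity_bits=0, independently_specified=False),
    Aspect("which face shows after a fair tumble", high_contingency=True,
           complexity_bits=2.6, independently_specified=False),
    Aspect("long run of outcomes matching a pre-announced pattern", high_contingency=True,
           complexity_bits=600, independently_specified=True),
]
for a in aspects:
    print(f"{a.name}: {classify(a)}")
```

The point of the sketch is simply that the filter is applied aspect by aspect, with chance as the default for high-contingency outcomes and design inferred only where both complexity and independent specification are present.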