Interestingly, the main point of Schliesser's post seems to be to elucidate the sometimes "very pernicious policy consequences" of philosophic activity. He suggests that most philosophers likely wouldn't sign on to a professional code of ethics like that of engineers, which obligates engineers first and foremost to "hold paramount the safety, health and welfare of the public." Why not? Because the "safety, health and welfare of the public" is not the paramount virtue of philosophic activity. Schliesser writes (my hotlinks added):
Let me offer two (controversial) examples: i) a great deal of philosophic sophistication is regularly deployed in order to clarify the doctrine of double effect. In practice the main function of the principle, however elegant, is to be a rhetorical fig-leaf to let politicians and generals morally off-the-hook for atrocious deeds. ii) Plenty of prominent philosophers are engaged in projects that facilitate the development of causal discovery software to be used in expert systems with the (foreseeable) dual use to fight, say, cancer or annihilate enormous numbers of innocent people deemed enemies by the government (etc.).

So, in the case of the principle of double effect, Schliesser rightly notes that the products of philosophic activity quite often provide excuses for morally objectionable acts. In the second case, Schliesser worries that too many philosophers are willing to under-emphasize or overlook problems endemic to causal discovery algorithms-- e.g., the problem of ignorance and the problem of inconsistency-- in effect providing philosophical sanction for the development of systems that are predictably antithetical to the "safety, health and welfare of the public" (even if the "double effect" of those systems is positive). He asks:
Should we be more willing to hold each other responsible for the foreseeable public impact of our words or shared standards?
I can't imagine that many philosophers would answer "no" to Schliesser's question. That general agreement notwithstanding, I also can't imagine that many philosophers would agree to the formulation of a Professional Code of Ethics of the sort that Schliesser seems to want, either. As he notes, there are several categories of arguments that professional philosophers might have a collective professional interest in disavowing. To make things clearer, let's say they're of three general sorts:
- Untrue Arguments, which may or may not also be morally objectionable. For example, "some ethnic groups are inherently inferior."
- Trivial Arguments, which are not morally objectionable, but which (in Schliesser's words) "diminish the beauty or elegance or worthiness of an argument."
- Arguments with Foreseeably Negative Policy Implications. I imagine that arguments that "condone" torture in exceptional circumstances, like the infamous "ticking time-bomb" scenario, would be examples of this sort.
Let me say that I am completely sympathetic with Schliesser's concerns. I think he lays them out well, and I think his examples (in the comments section, which is expanding even as I write this) of philosophic activity with deeply problematic policy implications are spot-on. Here's my issue: what would the putative Philosophers' Ethical Code forbid? Let's return to the general principles of the Engineering Ethics Code for a moment. They are as follows:
- Engineers shall hold paramount the safety, health and welfare of the public and shall strive to comply with the principles of sustainable development in the performance of their professional duties.
- Engineers shall perform services only in areas of their competence.
- Engineers shall issue public statements only in an objective and truthful manner.
- Engineers shall act in professional matters for each employer or client as faithful agents or trustees, and shall avoid conflicts of interest.
- Engineers shall build their professional reputation on the merit of their services and shall not compete unfairly with others.
- Engineers shall act in such a manner as to uphold and enhance the honor, integrity, and dignity of the engineering profession and shall act with zero-tolerance for bribery, fraud, and corruption.
- Engineers shall continue their professional development throughout their careers, and shall provide opportunities for the professional development of those engineers under their supervision.
I'm not sure what could possibly serve as the First Principle for a Professional Philosopher's Code of Ethics that wouldn't amount to something like declaring "forbidden knowledge." Many years ago, I attended a lecture by my (at that time) dissertation advisor, John D. Caputo, where he was asked to answer the question: is there such a thing as "forbidden" knowledge? Jack answered "no"-- not because there aren't some questions that, if pursued to their philosophical ends, produce knowledge that has foreseeably pernicious consequences for the public, but rather because the pursuit of knowledge per se can't be forbidden. If we can ask a question, Caputo speculated, we will pursue it to its end. In fact, we're already engaged in the pursuit of its end. (See the first sentence of Aristotle's Metaphysics.) I don't think Caputo's point was to say that "anything goes" in philosophical speculation-- a position he, and many other deconstructionists, are often wrongly credited with holding-- but only to say that the fact of the matter is that if we can think it (even as a question, as a possibility), "it" already is in the realm of that for which we are, and ought to be, held intellectually (and, I would add, professionally) responsible.

"Forbidding" certain questions, or certain arguments, or certain conclusions, only serves to resign those matters to the realm of the Secret, which poses a far more pernicious danger to the public than bad arguments do, because it removes them from the public space where we hold each other accountable for our arguments and their consequences.
For that reason, I think Schliesser's question-- should we be more willing to hold each other responsible for the foreseeable public impact of our words or shared standards?-- is the MOST important question. But it can only be asked because professional philosophers don't have an ethical code that forbids any particular question from being asked and answered, however incompletely, imperfectly or, in some cases, dangerously.
UPDATE 1/5/12: Schliesser's essay has generated several other critical responses. See Mohan Matthen's here and here. Joshua Miller's (excellent) criticism is here. Schliesser's rejoinders are here and here.