Don’t know if I am preaching to the choir, but with how much libs try to use the trolley problem to support their favorite war criminal, it got me thinking just how cringe utilitarianism is.
Whatever utilitarianism may be in theory, in practice it just trains people to think like bureaucrats who believe themselves to be impartial observers of society (not true), holding power over the lives of others for the sake of the common good. It’s imo a perfect distillation of bourgeois ideology into a theory of ethics. It’s a theory of ethics from the pov of a statesman or a capitalist. Only those groups of people have the power and information necessary to actually act in a meaningfully utilitarian manner.
It’s also noteworthy just how prone utilitarians are to creating false dichotomies and ignoring historical context. Although this might just be the result of the trolley problem being so popular.
I take it you are thinking about act utilitarianism, right?
Utilitarianism, and consequentialism in general, really. If one were to develop a utilitarian code of ethics, it would just be the same as a deontological code of ethics with the relevant rules. There isn’t much actual difference between ‘you have a duty to do what maximises happiness’, ‘an action whose consequences include the maximisation of happiness is good’, and ‘a rule of ethics whose satisfaction leads to the maximisation of happiness is good, and so are the corresponding actions’, etc.
Yeah, this. Also, in practice, humans tend to act first and create ethical justifications post hoc. Even if we were able to create a totally ethical system, it would be unlikely ever to actually be applied correctly.
That doesn’t mean we shouldn’t be attempting to create better things, or more ethical outcomes, it just means that getting bogged down in ethical arguments is pointless. To quote Parenti, I support the revolution that feeds and teaches the children.
Depends. Some deontological systems can get really strict (e.g. Kantian), and any two utilitarians would probably have significant disagreements about how to actually calculate utility. Maybe it won’t look too different in ethics, but in government policy the moral systems would produce very different outcomes.
For example, a scenario I was told about when learning about this stuff was that of a city government commissioning a factory in a poor neighbourhood. The factory will bring residents jobs, but will negatively impact their health. How the situation should be approached differs sharply between the moral systems. Even utilitarians have difficulty agreeing amongst themselves on how to weigh incommensurable things like health vs jobs.
Sure. However, that’s not relevant, as the claim is that a consequentialist (or mixed, for that matter) code of ethics is isomorphic to some deontological one, and not the other way around.
Not sure why you are bringing that up, considering that it was neither in question, nor relevant to whether or not a consequentialist code of ethics can just be rewritten as a deontological one.
Not sure what you mean by this. Care to provide an example where two codes - one deontological, one utilitarian and non-deontological - that label the same actions as ‘good’ and the same actions as ‘bad’ would produce different results depending on which one a government (or any organisation, for that matter) subscribed to?
It differs between different systems of morality, yes, but that doesn’t actually refute my claim that every consequentialist or mixed consequentialist-deontological system of morality is isomorphic to some deontological one.
I don’t think you are using the word ‘deontological’ correctly here. A deontological theory is one where you have a moral obligation based on the type of action you perform, rather than on its consequences.
Now you could theoretically make a theory where you first use utilitarianism (and an all-knowing computer) to determine the goodness of any possible action, then make a deontological imperative to perform those actions at the times and locations where they produce good results. I have thought about how to make the two theories compatible as well.
However, for a human with limited knowledge, utilitarianism and deontology aren’t going to be meaningfully isomorphic unless you really stretch the meanings of those two systems. A human will never be able to come up with a deontological ruleset rich enough to maximise utility in every possible situation they will encounter.
I think it would be more accurate to say that utilitarianism in general can simulate deontology by assigning utilities to the type of action a person performs. For lack of a better term, I would consider utilitarianism a “Turing complete” moral theory, while deontology would be closer to combinatorial logic in moral terms.
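A toy sketch of that ‘simulation’ direction (not from the thread - the action names and numbers are entirely made up for illustration): if a dominating utility term is attached to action *types*, the utilitarian calculus reproduces a deontological prohibition no matter what the consequences are.

```python
# Toy sketch: a utilitarian evaluator "simulating" a deontological rule
# by assigning utilities to action *types*. All names/values invented.

FORBIDDEN_TYPES = {"lying", "theft"}  # hypothetical deontological prohibitions

def utility(action_type, welfare_gain):
    # A huge negative utility on forbidden types reproduces the
    # deontological verdict regardless of the welfare consequences.
    type_penalty = -1_000_000 if action_type in FORBIDDEN_TYPES else 0
    return welfare_gain + type_penalty

def permitted(action_type, welfare_gain):
    return utility(action_type, welfare_gain) > 0

print(permitted("lying", 500))    # False: the type penalty swamps the gain
print(permitted("charity", 500))  # True
```

The point of the sketch is only that the type-based penalty term can always outweigh any finite welfare term, which is one way to read ‘utilitarianism can simulate deontology’.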
The thing is, we can always turn an axiom of a consequentialist code of ethics (just in case: I use the expressions ‘system of morality’, ‘morality system’, and ‘code of ethics’ interchangeably) into an axiom of an equivalent deontological code of ethics by saying that, instead of [an action being good because it has such-and-such consequences], [you have a duty to perform actions that are evaluated to have such-and-such consequences] (square brackets ‘[’, ‘]’ mark the parts of the ‘instead’ clause for clarity). I suppose that does mean it is possible for an action that is not evaluated to have particular consequences to nevertheless lead to them, and for an action that was evaluated a priori to lead to particular consequences to not actually lead to them; that is a refutation of my original claim, as the two systems can end up with different a posteriori descriptions.
However, I do posit that, in a sense, there is still no significant difference in how deontological and consequentialist systems of morality work prescriptively: you can’t actually know the future with absolute certainty, so every principled subscriber to a consequentialist code of ethics is going to act in accordance with what I previously called an ‘equivalent deontological code of ethics’ - they will try to evaluate an action’s consequences a priori, and act in accordance with those predictions.
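The translation being described can be sketched mechanically (a toy model; the predictor and its numbers are invented stand-ins for an agent’s a priori estimates, not anyone’s actual proposal): take any utilitarian goodness criterion and define a single duty rule on top of it.

```python
# Toy sketch of the one-axiom translation: given any utilitarian
# criterion `utilitarian_good`, define a deontological code whose single
# rule is "you have a duty to perform actions evaluated as good".
# All action names and estimates below are illustrative placeholders.

def expected_happiness(action):
    # Stand-in for an agent's a priori prediction of consequences.
    estimates = {"feed_children": 10, "build_factory": 3, "pollute": -7}
    return estimates.get(action, 0)

def utilitarian_good(action):
    # Consequentialist axiom: an action is good iff its predicted
    # consequences increase happiness.
    return expected_happiness(action) > 0

def deontological_duty(action):
    # Equivalent deontological axiom: you have a duty to perform
    # actions that are *evaluated* to have good consequences.
    return utilitarian_good(action)

# Prescriptively, the two codes label every action identically:
actions = ["feed_children", "build_factory", "pollute"]
print(all(utilitarian_good(a) == deontological_duty(a) for a in actions))
```

Since the duty rule is defined directly in terms of the a priori evaluation, the two codes can only come apart a posteriori, when predictions turn out wrong - which is exactly the concession above.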
I mean, you won’t be finding many people who actually adopt explicit codes of ethics in general, whether deontological, consequentialist, virtue-ist, or a mix of any of those, especially one that they would actually follow.
Also, I’m not sure why you think we can’t just find a deontological code of ethics from a given utilitarian one. You basically just add ‘you have a duty to do actions that satisfy such-and-such criteria for good actions in this given utilitarian code of ethics’.
I mean, we can also find a deontological code of ethics equivalent to a given utilitarian one using the method that I have outlined previously. It can even be done with just one additional axiom.
There are better analogies that would involve just using set theory, if I understand correctly what you are trying to say - that every deontological code of ethics has an equivalent utilitarian code of ethics, and not vice versa. I disagree, as I have provided a method for finding a deontological code of ethics that is equivalent to a given utilitarian one.
Also, neither utilitarian nor deontological codes of ethics map well onto virtue-based codes of ethics, as those tell us whether people are good or bad, not whether actions are good or bad.
Yeah, this is just theoretical. I am pretty much assuming we are talking about computers calculating morality here rather than actual people.
Yeah, scratch my analogy. It’s actually kind of terrible.
Well, the latter is just called rule utilitarianism, which is more or less any moral system that is deontological in form but tries to approximate utilitarianism. Basically, you create a deontological ruleset which tries to predict in advance what maximises utility.
It only approximates utilitarianism, as I see it, because once a deontological ruleset is laid out, you can’t change it. If you then encounter an action which will have negative consequences but which your ruleset says you should do, you have to do it - or else you are just doing utilitarianism and calling it deontology.
You can improve the approximation arbitrarily by making a richer and richer ruleset, but this requires more and more knowledge and computing beforehand.
Given that, under that system, you have a duty to do something, it is deontological.
It’s not really predicting anything, not necessarily. Who or what evaluates the possible consequences of an action can be determined in many ways that are more granular than just a per-rule basis.
The same applies to all the other codes of ethics, considering that they are just systems of logic.
If the predictions regarding the consequences of a particular action are re-evaluated and it is no longer a good action under a given utilitarian code of ethics, it also becomes a non-good action under the equivalent deontological code of ethics - you no longer have a duty to perform it.
I think you have a very narrow and naive view of deontology, one that is often instilled in students when Kantian deontology is taught to them using very primitive examples.
But also, I very much do posit that, as prescriptive systems, deontological ones, consequentialist ones, and deontological-consequentialist mixed ones can’t really be distinguished in any significant manner. As such (provided that we are working with a utilitarian system that does not involve any elements of virtue-based codes of ethics), I can just say ‘utilitarianism is just deontology in a trench coat’.
I have provided a method for finding an equivalent deontological code of ethics that differs from the original in the cardinality of its set of rules by the addition of just one axiom.