3 Answers
There's a 'should' either stated or implied.

'If you add 1 to 1 you should get 2' is not a statement people would necessarily consider normative.
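One way to make the contrast concrete (a sketch in Lean; `Ought` is a hypothetical deontic operator I'm positing, not anything arithmetic supplies): the purely descriptive reading is provable outright, while the 'should' reading needs extra machinery that arithmetic doesn't contain.

```lean
-- Descriptive reading: a plain theorem of arithmetic, no 'should' anywhere.
theorem one_plus_one : 1 + 1 = 2 := rfl

-- Normative reading: the same content wrapped in a deontic operator.
-- `Ought` is a hypothetical posit here; arithmetic provides no such thing.
axiom Ought : Prop → Prop

-- We can state "it ought to be that 1 + 1 = 2", but nothing in
-- arithmetic proves it: the 'should' is extra, non-arithmetical machinery.
#check Ought (1 + 1 = 2)
```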

Vynn
Why is it not considered normative? It follows the rules of arithmetic: the operation should be carried out according to the "correct" procedure, and failure to do so results in something "wrong". So why does it not count as normative?
Arepo
You could make a case that it is a normative statement - certainly not everyone would consider it not to be. It would have been clearer if I'd phrased my response as a question: 'would you consider that statement to be normative?' My sense is that you have a pretty good idea of how philosophers use the word 'normative', and that you're pursuing a level of clarity about it that's impossible to obtain. Since the word (by definition) doesn't map to anything in the physical or mathematical worlds - and arguably even if it did - it just isn't possible to identify a class of phenomena with which you could concretely associate it. It's a convenience notion moral realists use to gesture at what they hope are sufficiently shared concepts. If you're sceptical that it succeeds, maybe you just aren't a moral realist...
Vynn
Yup
Arepo
Self-pimp: http://www.valence-utilitarianism.com/posts/moral-exclusivism
Emrik
Oh, I like this. Seems good to have a word for it, because it's a set of constraints that a lot of us try to fit our morality into. We don't want it to have logical contradictions. Seems icky. Though it does make me wonder what exactly I mean by 'logical contradiction'. 
Leo
Aristotle would answer "'should' is said in many ways". I was of course thinking of the normative 'should', which I believe is the first that comes to mind when someone asks about normative sentences. But I'd be highly interested in a different kind of counterexample: a normative sentence without a 'should' stated or implied.
Arepo
Defining a normative statement as 'a statement with a normative "should"' has certain problems...
Leo
That's true, but that comment was only meant for you, who seemed confused about what kind of 'should' you should use in a normative sentence. I took it for granted that you already knew 'normative', because you had posted a nice and useful answer to the original question.

Do "must" and "may" imply a should?

There's some confusion around the term:

In economics, "Normative Econ" often means an axiom-based approach: state "reasonable conditions on preferences and production functions" and derive their necessary implications. See most textbooks, like Mas-Colell et al.
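To illustrate the axioms-to-implications move (a minimal sketch in Lean; the names `strict` and `no_strict_cycle` are my own, not drawn from any particular textbook): take transitivity of weak preference as the stated axiom; one derived necessary implication is that strict preference cycles are impossible.

```lean
-- Hypothetical setup: a weak-preference relation `pref` on a type `α`,
-- with strict preference defined in the usual derived way.
def strict {α : Type} (pref : α → α → Prop) (x y : α) : Prop :=
  pref x y ∧ ¬ pref y x

-- Derived implication: the transitivity axiom alone rules out strict cycles.
theorem no_strict_cycle {α : Type} (pref : α → α → Prop)
    (trans : ∀ x y z, pref x y → pref y z → pref x z)
    (x y z : α)
    (hxy : strict pref x y) (hyz : strict pref y z) (hzx : strict pref z x) :
    False :=
  hzx.2 (trans x y z hxy.1 hyz.1)
```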

In common parlance, and maybe in psych, I've heard "normative behaviour" used to mean something like "typical, normal, socially acceptable behaviour".

Can you tell me where "normative behaviour" and "typical behaviour" have been conflated? I'm very sure that's a big no-no, even in the social/psychological sciences.

david_reinstein
I just remember having seen it, but maybe that was in common parlance and not in social science.

I don't think there's a perfect answer, but as a heuristic I defer to the logical positivists: if you can't, even in principle, find direct evidence for or against the statement by observing the physical world, and you can't mathematically prove it, and on top of that it sounds like a statement about behaviour or action, then you're probably in normland.
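Read as a decision procedure, the heuristic is a conjunction of three tests. A rough sketch (the boolean inputs stand in for judgments no program could actually make):

```python
def probably_normative(empirically_testable: bool,
                       mathematically_provable: bool,
                       about_action: bool) -> bool:
    """Sketch of the heuristic: a statement lands in 'normland' when no
    observation could bear on it even in principle, no mathematical proof
    is available, and it concerns behaviour or action."""
    return (not empirically_testable
            and not mathematically_provable
            and about_action)

# 'You should keep your promises': unobservable, unprovable, about action.
assert probably_normative(False, False, True)

# '1 + 1 = 2': provable, so the heuristic doesn't flag it.
assert not probably_normative(False, True, False)
```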

Would ontological statements which can't be proven by observation also count as normative statements? E.g. 'I am real', 'the world is real', 'I am not real', 'the self is not real', etc.

Arepo
I'm not sure how to interpret 'real' there. If you mean 'real' as opposed to something like a hologram, I'd say the sentence is underdefined. If you mean it as synonymous with a proposition about physical state, such that 'there are two oranges in front of me' would be approximately equivalent to 'the two oranges in front of me are real', then I think you're asking about any proposition about physical state. In that case I don't think there's much reason to call such statements 'normative': no statement can be proven by physical observation, so that criterion would make basically all parseable statements normative, which would make the term useless. Although I'm sympathetic to the idea that it is.