I'm the Founder and Co-director of The Unjournal. We organize and fund public, journal-independent feedback, rating, and evaluation of hosted papers and dynamically presented research projects. We focus on work that is highly relevant to global priorities (especially in economics, social science, and impact evaluation), and we aim to encourage better research by making it easier for researchers to get feedback and credible ratings on their work.
Previously I was a Senior Economist at Rethink Priorities, and before that an Economics lecturer/professor for 15 years.
I'm working to improve EA fundraising and marketing; see https://bit.ly/eamtt
And on projects bridging EA, academia, and open science; see bit.ly/eaprojects
My previous and ongoing research focuses on the determinants and motivators of charitable giving (propensity, amounts, and 'to which cause?'), drivers of and barriers to effective giving, and the impact of pro-social behavior and social preferences in market contexts.
Podcasts: "Found in the Struce" https://anchor.fm/david-reinstein
and the EA Forum podcast: https://anchor.fm/ea-forum-podcast (co-founder, regular reader)
Twitter: @givingtools
I agree there's a lot of diversity across non-profit goals and thus no one-size-fits-all advice will be particularly useful.
I suspect the binding constraint here is that people on nonprofit boards are often doing it as a very minor part-time thing, and even when they are directly aligned with the mission, they find it hard to prioritize board work when other tasks and deadlines are more directly in their face.
And people on non-profit boards generally cannot get paid, so a lot of our standard cultural instincts tell us not to put a high premium on this.
Of course there can be exceptions when people are passionately aligned with the mission of the organization, and when their role is very much in the public eye. Or when it's a social experience, particularly if there are in-person get-togethers. But for organizations that you and I have led or been involved with, none of these factors are quite as strong as they might be for the "local afterschool children's program/soup kitchen/pet rescue center". Nor are many of these orgs especially high-profile or in the public eye; they're a bit more meta and niche, I think.
With this in mind, I think what you're suggesting - that the full-time staff/leader does need to do the agenda- and priority-setting - makes sense, with other people on the team chiming in with ideas and information the leader isn't aware of, giving sanity checks and communications/comprehensibility feedback, which in my experience is indeed often very helpful.
It did a decent job for this academic paper, but I think it's hampered by only having content from arXiv and various EA/tech forums. Still, it generated some interesting leads.
Prompt:
... find the most relevant authors and work for Observational price variation in scanner data cannot reproduce experimental price elasticities https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4899765 -- we're looking for methodological experts to evaluate this for The Unjournal to inform our pivotal question "How do plant-based products substitute for animal products (welfare footprint)?"
Trying this out for various Unjournal.org processes (like prioritizing research, finding potential evaluators, and linking research to pivotal questions) and projects (assessing LLM vs. human research evaluations). Some initial forays (coming from a conversation with Xyra); I still need to human-check the output.
~prompt to Claude Code about @Toby_Ord and How Well Does RL Scale?
``Toby Ord's writing -- what do the clusters look like? What other research/experts come closest to his post .... https://forum.effectivealtruism.org/posts/TysuCdgwDnQjH3LyY/how-well-does-rl-scale``
This interactive visual (with some extra prompts)
| Author | Distance | Key Work |
|---|---|---|
| 1a3orn | 0.143 | Parameter Scaling Comes for RL, Maybe; New Scaling Laws for LLMs |
| Pablo Villalobos | 0.150 | Trading off compute in training and inference |
| Matrice Jacobine | 0.168 | Does RL Really Incentivize Reasoning Capacity in LLMs? |
| Lukas Finnveden | 0.169 | Before smart AI, there will be many mediocre or specialized AIs |
| ryan_greenblatt | 0.179 | What’s going on with AI progress and trends? |
| Ryan Kidd | 0.188 | Implications of the inference scaling paradigm for AI safety |
@1a3orn @Pablo Villalobos @Matrice Jacobine🔸🏳️⚧️ @Lukas Finnveden @Ryan Greenblatt @Ryan Kidd -- if you have a chance, let me know if this is accurate/relevant.
Closest related papers (by semantic distance):
| Paper | Authors | Distance |
|---|---|---|
| The Art of Scaling RL Compute for LLMs | Khatri et al. | 0.125 |
| Webscale-RL: Automated Data Pipeline for Scaling RL | Cen et al. | 0.142 |
| AReaL: Large-Scale Asynchronous RL for Language Reasoning | Fu et al. | 0.142 |
| Does RLHF Scale? | Hou et al. | 0.162 |
Based on semantic analysis, Toby Ord’s writing appears to fall into three main thematic clusters.
Key implication: This likely lengthens AI timelines and affects governance and safety strategies.
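For anyone curious how the "Distance" columns above could be reproduced: I'm not certain what pipeline Claude Code used under the hood, but a standard approach is to embed each text and compare cosine distances. Below is a minimal sketch under that assumption, using the sentence-transformers package; the model name and the example texts are placeholders, not the actual corpus or tooling used.

```python
# Minimal sketch of an embedding-based "semantic distance" ranking.
# Assumes: pip install sentence-transformers numpy
# Model choice and texts below are illustrative placeholders.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model

# Hypothetical corpus: the target post plus candidate authors' representative writing
target = "How well does RL scale? ..."  # e.g. the text of the post in question
candidates = {
    "Candidate author A": "Parameter scaling comes for RL ...",
    "Candidate author B": "Trading off compute in training and inference ...",
}

target_vec = model.encode(target, normalize_embeddings=True)
for name, text in candidates.items():
    vec = model.encode(text, normalize_embeddings=True)
    # cosine distance = 1 - cosine similarity; smaller means semantically closer
    distance = 1.0 - float(np.dot(target_vec, vec))
    print(f"{name}: {distance:.3f}")
```

Ranking candidates by this distance would yield tables like the ones above, though the exact numbers obviously depend on the embedding model and the texts chosen.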
At The Unjournal we have a YouTube channel, and I'm keen to produce more videos, both about our process and the case for our model, and about the content of the research we evaluate and the pivotal questions we consider, which are generally EA-adjacent. This includes explainer videos, "making the case" videos, interviews/debates, etc.
But as you and most organizations probably realize, it's challenging and very time-consuming to produce high-quality videos, particularly creating and synchronizing images and sound, editing the video, etc. Without a dedicated communications/media person with the time and skills to do this, it's a big ask. I've tried a bit to do this with simple tools like iMovie, as you'll see, and I've experimented with AI-powered tools like Descript, but I haven't found these easy to adopt. Maybe rapid improvements are coming soon that will make this much easier.
Anyway, I see value in sharing resources, skills, tips, and templates for this among EA and EA-aligned organizations; I'm looking for suggestions and links.
Should be fixed now, thanks. The problem arose because I started by duplicating the first one and then adjusting the text, but the changed text only showed up on my end (NB: @EA Forum Team).
Some notes/takes:
The Effective Giving/EA Marketing project was going fairly strong, making some progress but also running into some limitations. But I wouldn't take the ~shutdown/pause as strong evidence against this approach. I'd diagnose it as:
1. Some disruption from changes in emphasis/agenda at a few points in the project, driven by the changing priorities in EA at the time, first towards "growing EA rather than fundraising" (Let's stop saying 'funding overhang', etc.) and then somewhat back in the other direction after the collapse of FTX
2. I got a grant and encouragement to pursue Unjournal.org full time, so I put this on hold, and no one stepped up to take it on in its current form
3. Some aspects may be partially covered by other initiatives such as @Lucas Moore 's work bringing together EA giving/fundraising orgs at Giving What We Can and marketing initiatives like @Good Impressions -- some of these are discussed and listed within that resource
(Let's have a chat -- I will dm.)
Here's the Unjournal evaluation package
A version of this work has been published in the International Journal of Forecasting under the title "Subjective-probability forecasts of existential risk: Initial results from a hybrid persuasion-forecasting tournament"
We're working to track our impact on evaluated research (see coda.io/d/Unjournal-...). So we asked Claude 4.5 to consider the differences across paper versions, how they related to The Unjournal evaluators' suggestions, and whether the evaluations were likely to have caused these changes.
See Claude's report here, coming from the prompts here. Claude's assessment: some changes seemed to potentially reflect the evaluators' comments, but several ~major suggestions were not implemented, such as requests for further statistical metrics and inference.
Maybe the evaluators (or Claude) got this wrong, or these changes were not warranted under the circumstances. We're all about an open research conversation, and we invite the authors' (and others') responses.
Project Idea: 'Cost to save a life' interactive calculator promotion
What about making and promoting a 'how much does it cost to save a life?' quiz and calculator?
This could be adjustable/customizable (in my country, around the world, for an infant/child/adult, counting 'value-added life years', etc.), and we could try to make it go viral (or at least bacterial), like the 'how rich am I?' calculator.
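To make the "customizable" part concrete, here's a toy sketch of the kind of calculation such a calculator might do. All numbers and names below are illustrative placeholders I've made up for the example, not GiveWell or GWWC estimates; a real version would pull vetted cost-effectiveness figures per intervention, country, and age group.

```python
# Toy sketch of a 'cost to save a life' calculator (placeholder numbers only).
from dataclasses import dataclass

@dataclass
class Intervention:
    name: str
    cost_per_life_saved: float         # placeholder USD figure, not a real estimate
    life_years_gained_per_life: float  # rough remaining life expectancy of beneficiaries

def cost_per_life_year(intervention: Intervention) -> float:
    """Convert 'cost per life saved' into 'cost per (value-added) life year'."""
    return intervention.cost_per_life_saved / intervention.life_years_gained_per_life

# Illustrative placeholder entry only
example = Intervention("Hypothetical child-health intervention", 5000.0, 50.0)
print(f"~${cost_per_life_year(example):,.0f} per life-year (placeholder numbers)")
```

The customization options above (country, age of beneficiary, life-years vs. lives) would then just swap in different vetted figures behind the same interface.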
The case
GiveWell has a page with a lot of technical details, but it's not compelling or interactive in the way I suggest above, and I doubt they market it heavily.
GWWC probably doesn't have the design/engineering time for this (not to mention refining this for accuracy and communication). But if someone else (UX design, research support, IT) could do the legwork I think they might be very happy to host it.
It could also mesh well with academic-linked research, so I may have some 'Meta academic support ads' funds that could work with this.
Tags/backlinks (~testing out this new feature)
@GiveWell @Giving What We Can
Projects I'd like to see
EA Projects I'd Like to See
Idea: Curated database of quick-win tangible, attributable projects