
NickLaing

CEO and Co-Founder @ OneDay Health
13678 karma · Joined · Working (6-15 years) · Gulu, Uganda · onedayhealth.org

Bio


I'm a doctor working towards the dream that every human will have access to high-quality healthcare. I'm a medic and director of OneDay Health, which has launched 53 simple but comprehensive nurse-led health centers in remote rural Ugandan villages. A huge thanks to the EA Cambridge student community in 2018 for helping me realise that I could do more good by focusing on providing healthcare in remote places.

How I can help others

Understanding the NGO industrial complex, and how aid really works (or doesn't) in Northern Uganda
Global health knowledge

Comments
1752

Thanks for the update, and the reasons for the name change make a lot of sense.

Instinctively I don't love the new name. The word "coefficient" sounds mathsy/nerdy/complicated, and most people don't know what the word coefficient actually means. The reasoning behind the name does resonate though, and I can understand the appeal.

But my instincts are probably wrong if you've been working with an agency and the team likes it too.

All the best for the future Coefficient Giving!

Thanks @mal_graham🔸 this is super helpful and makes more sense now. I think it would make your argument far more complete if you put something like your third and fourth paragraphs here in your main article.

And no I'm personally not worried about interventions being ecologically inert. 

As a side note, it's interesting that you aren't putting much effort into making interventions happen yet - my loose advice would be to get started trying some things. I get that you're trying to build a field, but to have real-world proof of tractability it might be better to try something sooner rather than later? Otherwise it will remain theory. I'm not too fussed about arguing whether an intervention will be difficult or not - in general I think we are likely to underestimate how difficult an intervention might be.

Show me a couple of relatively easy wins (even small-ish ones) and I'll be right on board :).

100% agree this would be the best solution. Unfortunately in almost all African countries, perceived sovereignty and not "bowing the knee" to the West is put at a far higher premium than things like drug approvals. This would be scoffed at across the continent for this reason.

To be fair, it's not like high-income countries are doing great at getting their approvals sorted, partly for similar pride reasons.

@Vasco Grilo🔸 have a look at my latest reply to @Ben_West🔸 below. I think there is a worldview where it's important and "good" to know who wrote the words, and who we are interacting with. I think we might even start to see legislation and guidelines which demand disclosure of who wrote what. Before AI, it was just assumed that all our words were our own. There are exceptions in human norms to this like having a "ghost writer" but I think that's ethically wrong too.

Putting aside whether it's "good" or "bad" for something to be written by an AI, and putting aside the question of quality, at the very least, given it's hard to detect whether a human wrote something or not, I think human readers should have the right to know who they are interacting with. Is it a human? Is it an AI? Is it a mix of both, and how did they mix?

I think your perspective is reasonable here, it's just not what's important to me. Genuine unfiltered human interaction is important to me. Knowing that I'm talking with someone without an AI in between is important to me. If that's not important to you that's fine. This is important to me not only because I value true direct human interaction, but also (as a secondary problem) because I think AI writing is samey and boring. Maintaining a public writing space with true diversity, quirkiness and strong voices is part of what drives engagement and excitement.

When I see your name on something, I want it to be 100% your voice and your words, like we are talking in a public space. Or at the very least I want you to tell me if it's not. If you're not concerned with that, then we have a fundamental, almost axiomatic difference about what matters in a forum like this. I think that's part of the reason why there's a bit of a chasm between our views and those who are happy with AI writing things. The quality of ideas and reasoning is only half of what matters for me. The other half is the discussion and interaction between us - the mingling of our minds. I'm not sure we can resolve this difference. If you genuinely don't mind whose "brain" words came from, and think that others don't have the right to know that as well, that's reasonable, but we may have fundamentally different beliefs.

A human or an AI could do good or bad research; I'm less concerned with that. Karma will sort that out. Karma can't answer the human interaction question above. We can discern from outside whether an argument is good or not. We can't discern from outside whose words they are - that's why we need the start-of-post disclosure at the very least (I would go further). An analogy might be if someone did a bunch of research for you and sent it to you, and then you used half of their words in your post. Ignoring the plagiarism element, that wouldn't be you talking with me, it would be someone else, which would be dishonest - unless you said "hey, this article is half my research assistant's words and half mine".

I think as a human I have the right to know who I'm interacting with.

Yep, that word "moral" was the only dubiously EA-coded-looking one in your prompts to me. But like you say, the results seem to hold, which is kind of wild...

I don't love the tone of this piece, but I heard him recently on the Ezra Klein show and he did pretty poorly. He clearly hadn't prepared, and acted defensive and incredulous at times. He does really need to work on his public appearances...

Malawi is really, really poor - worse than almost all Sub-Saharan landlocked countries, even those with previous conflicts and worse institutions. I get what you mean that Malawi could be doing twice as well and still be really poor, but despite that I think the title is just fine!

From what I can see, the main issue here is who writes the words, not how much LLMs are used in the process.

If most of the brainstorming, research and structuring was done by the LLM but you wrote the words yourself, from my perspective that wouldn't require any caveat at all. But if LLMs wrote half of the words, then I would definitely want to know at the top of the post (and personally I probably wouldn't read it).

That's why it's so important that we get clear labelling. On this forum we should be able to choose whether or not to read something not written by a human. I would hope that only a minority of posts will have heavy LLM writing, so most posts won't need any disclosure at all.

I completely agree with @Austin that people shouldn't need to disclose anything if they use LLMs only for feedback and copy editing - like he said, they shouldn't have to under this policy. I have seen people disclosing that anyway, but hopefully it will settle down when they realise it isn't necessary.

In the AI frame, I remember reading about 3 situations on the forum (one of which was Mechanize). I also see this to a lesser extent around animal sentience arguments from those deep in the animal welfare world.

The most pertinent example for me would be Anthropic's top leadership ditching their solid safety plan with clear red lines for a vague and practically useless one, and the justifications by @Holden Karnofsky (whose wife owns the company), which felt strange to me. He usually makes such compelling arguments, and that one seemed less so. I'm not the most rational person, but Habryka's arguments against the safety plan change on LessWrong were compelling to me.

I'm not saying we shouldn't argue the object-level point, just that we should consider people's incentives and weight the opinions of those with power/money conflicts of interest somewhat less heavily than those without.
