keivn
−32 karma
Comments (6)

case in point: you urging that i fall back on convention (“standard essay format”) to conform to this community. in fact, that pressure is precisely why i used the llm in the first place.

why should a raw, unrefined post, one that nonetheless makes a clear argument and cites its sources, be rejected over formatting? it would appear optics matter more than ideas, no?

i think your comment highlights exactly what i’m trying to get at: 

“… a different forum might be a better fit for your style.”

“its tone was probably out of step with how we talk, etc. Downvoting a comment like that amounts to ‘this is not to my tastes and I want to talk about something else.’”

ea is a community with the power to influence research, policy, and more, with real-world implications. dismissing ideas you simply don’t care for is dangerous in that context, especially when, for example, it is posited that unsafe ai is already here, and ai development arguably has cascading effects on all the other areas of concern to ea. failing to make an argument for why such a claim is unfounded or incorrect looks like negligence, and ultimately a failure of the “better” ea aims to bring about. if it’s been harped on before and addressed, why not point someone new or misguided in the right direction? discourse is how mutual, collective progress is made, not a small few deeming what is worthy or not.

yes - my writing tends to start as a loose collection of thoughts and ponderings that i then flesh out, trying to make clearer what i’m getting at and to draw a clear through line in my logic. i don’t think there is anything wrong with using assistance as long as the core ideas and arguments aren’t being artificially generated, which i do not do. to be fair, i assume a loose collection of thoughts would not be well received given what i’ve seen posted here, but i can test that out and see if what i have to say is received any better.

thanks for the comment - i’ll look into the key phrases you mentioned. i guess i’m kind of surprised that, if it’s been discussed before, there doesn’t appear to be any urgency around addressing it. it seems pretty immediate to me if unsafe ai is already here as opposed to hypothetical, no?

great job tying history, lived experience, and analogies together to give a different, overlooked (if not completely ignored) perspective on longtermism

i’ve experienced the same issues having recently joined the ea forum and other similar communities - there is a homogeneity that is rewarded, while more fringe content is rejected

“There's a multiplier effect. People who benefit from these programs often go on to train and mentor others.”

^ the issue is that this remains researcher- and engineer-centric

it sounds like what’s needed are people skilled in navigating ambiguity, especially in a frontier context like this (unlikely to be your average researcher or engineer)

outside-of-the-box thinking in this domain requires outside-of-the-box talent