Welcome to the EA Forum bot site. If you are trying to access the Forum programmatically (either by scraping or via the API), please use this site rather than forum.effectivealtruism.org.

This site has the same content as the main site, but is run in a separate environment to avoid bots overloading the main site and affecting performance for human users.
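For example, here is a minimal sketch of a programmatic request in Python. It is written under two assumptions that go beyond this notice: that the bot site exposes the same public GraphQL endpoint at /graphql as the main Forum software, and that BOT_SITE_BASE stands in for this site's hostname, which is not stated here.

# Minimal sketch of fetching recent posts via the Forum's GraphQL API.
# Assumptions: this bot site serves the same /graphql endpoint as the main
# Forum, and BOT_SITE_BASE is a placeholder for this site's hostname.
import requests

BOT_SITE_BASE = "https://example-bot-site"  # placeholder, replace with this site's host

QUERY = """
{
  posts(input: {terms: {view: "new", limit: 5}}) {
    results {
      title
      pageUrl
      postedAt
    }
  }
}
"""

response = requests.post(
    f"{BOT_SITE_BASE}/graphql",
    json={"query": QUERY},
    headers={"User-Agent": "example-research-bot/0.1 (you@example.com)"},
    timeout=30,
)
response.raise_for_status()
for post in response.json()["data"]["posts"]["results"]:
    print(post["postedAt"], post["title"], post["pageUrl"])

Whatever client you use, identifying your bot in the User-Agent header and rate-limiting your requests is good practice.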

Quick takes

Reposting from LessWrong, for people who might be less active there:[1]

TL;DR
* FrontierMath was funded by OpenAI[2]
* This was not publicly disclosed until December 20th, the date of OpenAI's o3 announcement, including in earlier versions of the arXiv paper where this was eventually made public.
* There was allegedly no active communication about this funding to the mathematicians contributing to the project before December 20th, due to the NDAs Epoch signed, but also no communication after the 20th, once the NDAs had expired.
* OP claims that "I have heard second-hand that OpenAI does have access to exercises and answers and that they use them for validation. I am not aware of an agreement between Epoch AI and OpenAI that prohibits using this dataset for training if they wanted to, and have slight evidence against such an agreement existing."

Tamay's response:
* Seems to have confirmed the OpenAI funding + NDA restrictions
* Claims OpenAI has "access to a large fraction of FrontierMath problems and solutions, with the exception of a unseen-by-OpenAI hold-out set that enables us to independently verify model capabilities."
* They also have "a verbal agreement that these materials will not be used in model training."

Some quick uncertainties I had:
* What steps did Epoch take or consider taking to improve transparency between the time they were offered the NDA and the time of signing the NDA?
* What is Epoch's level of confidence that OpenAI will keep to their verbal agreement to not use these materials in model training, both in some technically true sense and in a broader interpretation of the agreement? (see e.g. the bottom paragraph of Ozzie's comment)

1. ^ Epistemic status: quickly summarised + liberally copy-pasted with ~0 additional fact checking, given Tamay's replies in the comment section
2. ^ arXiv v5 (Dec 20th version): "We gratefully acknowledge OpenAI for their support in creating the benchmark."
It seems that part of the reason communism is so widely discredited is the clear contrast between neighboring countries that pursued more free-market policies. This makes me wonder: practicality aside, what would happen if effective altruists concentrated all their global health and development efforts into a single country, using similar neighboring countries as the comparison group? Given that EA-driven philanthropy accounts for only about 0.02% of total global aid, perhaps EA's approach could have more influence by definitively proving its impact than by trying to maximise the good it does directly.
A minor personal gripe I have with EA is that it seems like the vast majority of the resources are geared towards what could be called young elites, particularly highly successful people from top universities like Harvard and Oxford. For instance, the opportunities listed on places like 80,000 Hours are generally the kind of jobs that such people are qualified for, e.g. AI policy at RAND, AI safety researcher at Anthropic, or something similar that I suspect less than the top 0.001% of human beings would be remotely relevant for.

Someone like myself, who graduated from less prestigious schools, or who struggles in small ways to be as high-functioning and successful, can feel like we're not competent enough to be useful to the cause areas we care about. I personally have been rejected in the past from both 80,000 Hours career advising and the Long-Term Future Fund. I know these things are very competitive, of course. I don't blame them for it. On paper, my potential and proposed project probably weren't remarkable. The time and money should go to those who are most likely to make a good impact. I understand this.

I guess I just feel like I don't know where I should fit into the EA community. Even many of the people on the forum seem incredibly intelligent, thoughtful, kind, and talented. The people at the EA Global I attended in 2022 were clearly brilliant. In comparison, I just feel inadequate. I wonder if others who don't consider themselves exceptional also find themselves intellectually intimidated by the people here.

We do probably need the best of the best to be involved first and foremost, but I think we also need the average, seemingly unremarkable EA-sympathetic person to be engaged in some way if we really want to be more than a small community and to be as impactful as possible. Though, maybe I'm just biased to believe that mass movements are historically what led to progress. Maybe a small group of elites leading the charge is actually what is needed.
EAG Bay Area Application Deadline extended to Feb 9th – apply now! We've decided to postpone the application deadline by one week from the old deadline of Feb 2nd. We are receiving more applications than in the past two years, and we have a goal of increasing attendance at EAGs, which we think this extension will help with. If you've already applied, tell your friends! If you haven't, apply now! Don't leave it till the deadline! You can find more information on our website.
Best books I've read in 2024

(I want to share, but this doesn't seem relevant enough to EA to justify making a standard forum post. So I'll do it as a quick take instead.)

People who know me know that I read a lot, and this is the time of year for retrospectives.[1] Of all the books I read in 2024, I'm sharing the ones that I think an EA-type person would be most interested in, would benefit the most from, etc.

Animal-Focused

There were several animal-focused books I read in 2024. This is the direct result of being a part of an online Animal Advocacy Book Club. I created the book club about a year ago, and it has been helpful in nudging me to read books that I otherwise probably wouldn't have gotten around to.[2]

* Reading Compassion, by the Pound: The Economics of Farm Animal Welfare was a bit of a slog, but I loved that there were actual data and frameworks and measurements, rather than handwavy references to suffering. The authors provided formulas, estimates, and back-of-the-envelope calculations, and did an excellent job looking at farm animal welfare like economists and considering tradeoffs, with far less bias than anything else I've ever read on animals. They created and referenced measurements for pig welfare, cow welfare, and chicken welfare that I hadn't encountered anywhere else. I haven't even seen other people attempt to put together measurements to evaluate what the overall cost and benefit would be of enacting a particular change in how farm animals are treated.
* Every couple of pages in An Immense World: How Animal Senses Reveal the Hidden Realms Around Us I felt myself thinking "whoa, that is so cool." Part of the awe and pleasure in reading this book was the wealth of factoids about how different species of animals perceive the world in incredibly different ways, ranging from the familiar (sight, hearing, touch) to the exotic (vibration detection, taste buds all over the body, electrolocation, and more). The author does a great job