I am a reader looking for knowledge.
I am actively looking for opportunities to bring change in society and make it my profession.
Reach out to me for anything. If I can help, I definitely will.
I agree with the proposal of university groups as impact-driven, truth-seeking teams, and I mentioned a few of my observations in response to your comment. Of course, it can work out. I tried to think through some of the reasons behind the ambiguity you mentioned; it is just my two cents. I, too, consider participation the most important thing.
As someone with first-hand experience of many points mentioned in the post: I can say that the current state of college-level EA groups is largely limited to theoretical discussion rather than action. There might be multiple reasons for this, but I can mention some that I have personally observed:
First-hand experience.
(Without going through part 2 and the mathematics)
After going through part 2: Good job! I hope to see this model in use sooner or later!
Thanks for writing such a useful post!
Upon observing the pattern, I can see why you stated in the conclusion that the graph would be mostly quite obvious. Interventions addressing high-intensity pain with the greatest time reduction would always move to the Pareto frontier. This bears directly on pain of the Disabling variety, which is the byproduct of most factory farming practices aimed at reduced production costs and efficient storage.
I think the mentioned multi-objective model can be quite handy. We can incorporate it into cost-effectiveness analyses of interventions. But I think we can also use it to obtain the prerequisites for estimating better Pareto-frontier data.
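To make the Pareto-frontier idea above concrete, here is a minimal sketch of a two-objective comparison of interventions. The intervention names, costs, and pain-averted figures are invented purely for illustration, and this is only one simple way such a frontier could be computed, not the post's actual model.

```python
# Toy two-objective setup: each intervention is
# (name, cost, hours_of_disabling_pain_averted).
# We want to minimize cost and maximize pain averted.
# All numbers below are made up for illustration.
interventions = [
    ("A", 10.0, 120.0),
    ("B", 15.0, 110.0),  # dominated by A: costs more, averts less
    ("C", 25.0, 300.0),
    ("D", 25.0, 250.0),  # dominated by C: same cost, averts less
    ("E", 40.0, 310.0),
]

def pareto_frontier(points):
    """Keep interventions not dominated by any other: no option is
    both cheaper-or-equal and averts at-least-as-much pain, with at
    least one strict improvement."""
    frontier = []
    for name, cost, averted in points:
        dominated = any(
            (c <= cost and a >= averted) and (c < cost or a > averted)
            for _, c, a in points
        )
        if not dominated:
            frontier.append((name, cost, averted))
    return frontier

print(pareto_frontier(interventions))
# → [('A', 10.0, 120.0), ('C', 25.0, 300.0), ('E', 40.0, 310.0)]
```

The interventions that survive are exactly those offering the best pain reduction available at their cost level, which is why high-intensity, high-time-reduction options always land on the frontier.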
Of course, behaviour is probably a good indicator of pain, as the evolutionary point of pain is to change behaviour. One caveat, though, is that behavioural patterns change after prolonged, repeated treatment; for example, cattle would respond differently from hens and chickens. That data can only be obtained through reliable monitoring and testing (a fundamental bottleneck).
P.S. I think this is an important consideration.
so far this has had limited success due to the scarcity of relevant studies on humans, not to mention species of farmed animals.
Somebody (or a bunch of somebodies) can only try to come forward to take action. But I am afraid that's what they tried to do.
Here, "they" refers to folks from OpenAI who tried to come forward and do something about Sam's manipulative behavior or lies or whatever was happening: anyone who might potentially provide leaks or shed some light.
It was like the first necessary crisis (the sooner, the better) for later events to unfold. I am unsure about their nature.
Here, I am unsure about the nature of the events.
I hope it is clear now.
I am not sure if leaks are a reliable source in these cases. For one, these instances don't have material evidence. Somebody (or a bunch of somebodies) can only try to come forward to take action. But I am afraid that's what they tried to do. It was like the first necessary crisis (the sooner, the better) for later events to unfold. I am unsure about their nature. This is partially based on the new board's latest update on choosing new members.
No they didn't, and it looks like we aren't going to see the investigation, unless somebody leaks it.
I might be out of the loop on the latest update here, but did they release the reason behind Altman's firing? I don't think he ever answered it in the subsequent interviews. Gradually the questions died down, or were perhaps dropped from questioners' lists due to a clause. Now that he is back at the table,[1] I think it has become more urgent to get the original motivations out.
Workers' rights usually fall under the umbrella of systematic rights violations, a term usually associated with human rights. We can use similar pointers to forecast questions and solutions. Some would overlap with data mining and fair use, which are hardly followed. It is not very hard for an average company to see the pivots created by OpenAI's crisis-management team. OpenAI research leads say their recent model is trained on a combination of data that's publicly available as well as data that OpenAI has licensed, but they can't go into much detail on it.
The last part is no easy feat for anyone to dive into. This conversation came out less than two days ago and seemed quite intentional. We can safely assume that this is going to be the new norm for addressing lawsuits; it is admissible in all formal proceedings, after all. It is important to note that statements like "in some ways, we really see modeling reality as the first step to be able to transcend it" are meticulously placed at the end. I don't think anyone would want to deal with them and get stuck in an expensive limbo beyond their control, which OpenAI can afford.
Always love to see book summaries on the forum! The audio feature complements them well enough...