Aithir

54 karma · Joined

Bio

My philosophical axioms relevant to EA are largely utilitarian, as long as that doesn't interfere with truthfulness. To be clear, though, I am not a moral realist!

My interests are:

- forecasting

- animal welfare

- politics (unfortunately)

- intelligence research

Comments (19)

EON Systems published a blog post explaining what they accomplished and how, and what they did not accomplish.

Apparently an emulated Drosophila melanogaster brain made a fly-like body move in a simulation.

> Watch the video closely. What you are seeing is not an animation. It is not a reinforcement learning policy mimicking biology. It is a copy of a biological brain, wired neuron-to-neuron from electron microscopy data, running in simulation, making a body move. The ghost is no longer in the machine. The machine is becoming the ghost.

The quote is from this article, where you can also see the video.

I cannot evaluate the claims being made myself. Maybe somebody here can? The company that supposedly did this is called EON Systems and lists Anders Sandberg and Robin Hanson as advisors.

The most EA-leaning AI company just took a major loss, and Hegseth specifically called out the movement, which strongly suggests that the future won't be what EAs would like it to be if he or people like him have a say about it, which they will.

I've never been more proud to be part of the Effective Altruism movement

Why? Because it took the right stance and lost? Because MAGA is bad and dislikes EA? This is what I mean by virtue signaling. I should have been clearer about that in my initial comment.

 

If declining to actively support MAGA's demands they support development of AI with the explicit purpose of being an autonomous killing device is "virtue signalling", what's left of "AI alignment" to pursue?

I would prefer both the US and non-democratic countries having these capabilities over only non-democratic countries having them. Amodei, by the way, also thinks that it might be necessary in the future to develop such weapons; he said so in this interview. He argues that legislation hasn't caught up with AI and that in the future Congress should decide on it, not a private company. But if you believe that AI improves on an exponential curve, then future legislation will very quickly be even more outdated, so that argument doesn't really make sense to me...

 

To clarify: I don't think declaring Anthropic a supply chain risk is justified.

The EA movement has a PR problem with basically half the American political spectrum, which includes the US President and also Elon Musk, whose xAI is currently the fourth most capable American AI company. Judging from the comments here, on X, and elsewhere, the plan seems to be to make it worse. Frankly, the discourse isn't "effective" at all, but virtue signaling.

There are areas where almost unbridgeable tensions between the EA and MAGA movements exist, but AI alignment really doesn't have to be one of them.

A lot of EAs wanted to slow down AGI development to have more time for alignment. Now Trump's tariffs have done that, accidentally and for the wrong reasons, but they did slow it down. Yet no EA seems happy about this. Given how unpopular his tariffs are, maybe people don't want to endorse them for PR reasons? But if you think that AI is by far the most important issue, that should easily lead you to say the unpopular truth. Scenarios where China reaches AGI before the US have become more likely, but that was always an argument against AI slowdown, and it didn't seem to convince many people in the past.

Thoughts?

 

Maybe this post should be placed in some AI safety thread, but I wasn't sure where exactly.

Hanania released a new blog post about his diet. It seems like part of the reason he doesn't go vegan is his lack of self-control. You can eat fewer calories and more protein on a vegan diet than he does on a non-vegan one.

> I'm snacking on chocolate covered almonds all day.

Is there any charity that works on shrimp welfare that you would consider worth giving to based on this analysis? My preferred charity currently is The Humane League.

I thought it might be helpful to share this article. The title speaks for itself.

 

How to Legalize Prediction Markets

What you (yes, you) can do to move humanity forward

So far his e-mail has gotten relatively little media attention; his English Wikipedia page was changed (though not the German one), and there was little social media outrage.

This seems like a pretty good outcome for him. The reasons I can think of for why that happened:

  1.  His strategy of preemptively publishing the e-mails worked.
  2.  He has no social media presence, which would be the natural place for people to pile on him.
  3.  He might simply have gotten lucky.
Answer by Aithir

EA should deprioritize human welfare causes, i.e., global health (unless it addresses an existential risk) and global poverty.
