titotal

Computational Physicist
8328 karma

Bio

I'm a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big R Rationalism, and I intend to explain why in great detail.

Comments (691)

I can't see the original, but this is easily clockable as written by AI, in the same style as a thousand other spam posts that pop up occasionally. Whether or not the style is inherently bad, it has been devalued from overuse. 

Part of the appeal of reading a personal reflection is hearing it in somebody's own voice. Don't give that up!

And why, exactly, would you expect every single new development to show up at exactly the right time to keep the overall curve exponential? What your view actually predicts is that progress will be a series of S-curves... but it says nothing about how long the flat bits in between will be.

Even within the history of AI, we have seen S-curves flatten out: there have been AI winters that lasted literal decades. 
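To make the point concrete, here is a minimal Python sketch (illustrative parameters only, not fit to any real data) of progress modelled as a sum of S-curves: if the breakthroughs arrive on a regular schedule the overall envelope can pass for an exponential, but one long gap between breakthroughs produces a decades-long plateau.

```python
import numpy as np

def s_curve(t, midpoint, height, steepness=1.0):
    """One logistic S-curve: slow start, rapid middle, flat saturation."""
    return height / (1.0 + np.exp(-steepness * (t - midpoint)))

def total_progress(t, midpoints, heights):
    """Overall progress as a sum of S-curves, one per breakthrough."""
    return sum(s_curve(t, m, h) for m, h in zip(midpoints, heights))

t = np.linspace(0, 60, 601)
heights = [1, 2, 4, 8, 16]  # each breakthrough bigger than the last (assumed)

# Breakthroughs arriving on a regular schedule: the envelope keeps climbing
# and can look roughly exponential.
regular = total_progress(t, midpoints=[10, 20, 30, 40, 50], heights=heights)

# Same breakthroughs, same sizes, but a long gap after the second one:
# progress plateaus for decades before the next S-curve kicks in.
with_winter = total_progress(t, midpoints=[10, 15, 42, 48, 55], heights=heights)
```

The shape of the overall curve depends entirely on when the next S-curve arrives, which is exactly the part the "it's all exponential" view doesn't predict.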

This concept appears to have been adapted from a George R.R. Martin quote:

“I think there are two types of writers, the architects and the gardeners. The architects plan everything ahead of time, like an architect building a house. They know how many rooms are going to be in the house, what kind of roof they're going to have, where the wires are going to run, what kind of plumbing there's going to be. They have the whole thing designed and blueprinted out before they even nail the first board up. The gardeners dig a hole, drop in a seed and water it. They kind of know what seed it is, they know if planted a fantasy seed or mystery seed or whatever. But as the plant comes up and they water it, they don't know how many branches it's going to have, they find out as it grows. And I'm much more a gardener than an architect.”

― George R.R. Martin

The AI didn't grow a seed or build a house: it ripped off the work of an actual person without giving that person credit. Which is unfortunately one of the main uses for LLMs right now.

Yes, it's important to take into account that this is the finding of one study, whereas the mosquito net results come from a much more rigorous Cochrane meta-analysis of many different studies.

Do you have more reasons to be skeptical of the 47% figure? After all, with 1000 bucks the household would be able to buy all the other interventions. 

They do point out that the 30 million dollars was spread out among everyone, not just pregnant women. They take a guess at what the cost per life saved would be if it was targeted specifically at pregnant women:

targeting UCTs to women in the third trimester of pregnancy under these assumptions would cost about USD PPP 92,000 (or $39,000 in nominal dollars) per child death averted.

We should get more data on the actual cost-effectiveness in a while from the targeted GiveDirectly work.

I'm genuinely asking, is this meant to be satire? Like, are you trying to critique the concept of "TESCREALISM" by creating a purportedly equally absurd framework in the other direction?

I found his page on the actual Santa Clara Law website, and it specifically mentioned that he founded the ChatGPT blog in question. So it looks like he is a legitimately qualified law professor, and from his profile it looks like he does specialise in IP law.

On the other hand, the blog has posts with questionable methodology, like asking ChatGPT for probabilities of lawsuit outcomes.

I would like to hear from other IP law specialists. 

When people talk about women's negative experiences in EA, they act as if it happens because men just don't care about women's feelings.

I opened the second example you cited, and they explicitly deny the framework you are offering here. I'll quote in full because I think it's relevant:

It’s interesting, because in these instances, I’m never talking about intention. I’m never saying, “this person condescends me because they are sexist” or “this person touches me because they are malicious.” And yet, immediately, a charitable intention is proposed to me. An explanation offered, the action is defended. Lest I start getting any ideas of even daring to suggest ill intent.

But I don’t care that much about intent anymore, because I’ve learned it’s a losing game. I don’t care if they are autistic or traumatised or delusional or shy or if free will exists or doesn’t. At a point, we’ve just lost the plot entirely. I am identifying an action that I want stopped. I do not need to have my empathy invoked. I naturally have immense empathy—often to my own detriment, often to a far greater degree than the “intention explorers” I’m conversing with. I am voicing a hurt and a need. And a helpful solution might be as simple as giving someone feedback. Or even just offering recognition.

The outcomes of your actions matter more than your intent. If it got to the point of you being banned from EA spaces, your actions probably had quite negative outcomes. In that case, it is your responsibility to manage your behaviour to prevent causing those negative outcomes again, and it is the community's responsibility to prevent you and others from doing the same.

No worries, I'm glad you find these critiques helpful!

I think the identical clone thing is an interesting thought experiment, and one that perhaps reveals some differences in worldview. I think duplicating Ava a couple of times would lead to a roughly linear increase in output, sure: but if you kept duplicating you'd run into diminishing returns. A large software company whose engineers were entirely replaced with Avas would be a literal groupthink factory: all of the blindspots and biases of Ava would be completely entrenched, making the whole enterprise brittle.

I think the push and pull of different personalities is essential to creative production in science. If you look at the history of scientific developments, progress is rarely the work of a single genius: more typically it is driven by collaborations and fierce disagreements.

With regards to comment 1: yeah, "accuracy" is an imperfect proxy, but I think it makes more sense than "number of tasks done" as a measure of algorithmic progress. This seems like an area where quality matters more than quantity. If I'm using ChatGPT to generate ideas for a research project, will running five different instances lead to the final ideas being five times as good?

I feel like there's a hidden assumption here that AI will at some point switch from acting like LLMs act in reality to acting like a "little guy in the computer". I don't think this is the case: I think AI may end up having different advantages and disadvantages when compared to human researchers.

More generally, I take issue with the idea that the number of "AI researchers" scales linearly with effective compute (gamma = 1 is put forward as your default hypothesis), and that these "AI researchers" can be assumed to have the same attributes as human researchers, like their beta value.

If you double the thinking time of ChatGPT, or its training time, do you get results that are twice as good? Empirically, no. According to OpenAI themselves, you need exponential increases in compute to get linear improvements in accuracy.
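As a toy illustration of what that kind of scaling means (the numbers below are made up for the example, not OpenAI's actual fit): if accuracy grows with the log of compute, each doubling of compute buys the same small additive bump, and large absolute gains require enormous multiplicative increases in compute.

```python
import numpy as np

# Hypothetical log-scaling law: each doubling of compute buys a constant
# additive bump in accuracy, rather than doubling the quality of the result.
def accuracy(compute, a=0.50, b=0.05):
    """Toy benchmark accuracy as a function of compute (arbitrary units)."""
    return a + b * np.log2(compute)

for c in [1, 2, 4, 8, 16]:
    print(f"compute x{c:>2}: accuracy ~ {accuracy(c):.2f}")

# Prints 0.50, 0.55, 0.60, 0.65, 0.70: doubling compute never doubles the
# result, and going from 0.50 to 1.00 would take ~2**10 = 1024x the compute.
```

That is the opposite of the "one more copy equals one more researcher's worth of output" picture.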

Running two AI systems in parallel is just not the same as hiring two different researchers. Each researcher brings with them new ideas, training, and backgrounds: while each AI is an identical clone. If you think this will change in the future that's fine, but it's a pretty big assumption imo. 
