Information system designer. https://aboutmako.makopool.com
Conceptual/AI writings are mostly on my LW profile https://www.lesswrong.com/users/makoyass
Browser extensions are almost[1] never widely adopted.
Whenever anyone reminds me of this by proposing the annotations-everywhere concept again, I remember that the root of the problem is distribution. You can propose it, you can even build it, but it won't be delivered to people. It should be. There are ways of designing computers/a better web where rollout would just happen.
That's what I want to build.
Software mostly isn't extensible, or where it is, it's not extensible enough (even web browsers aren't as extensible as they need to be! Chrome has started sabotaging adblock, by the way!). Extensions aren't managed collectively (Chrome would block any such proposal under the pretence that it's a security risk), so features that are only useful if everyone has them just can't come into existence. We continue to design under the assumption that ordinary people are supposed to know what they want before they've tried it.
There are underlying reasons for this: There isn't a flexible shared data model that app components can all communicate through, so there's a limit to what can be built, and how extensible any app can be. Currently, no platform supports sandboxed embedded/integrated components well.
So I started work there.
And then that led to the realization that there is no high-level programming language that would be directly compatible with the ideal data model/type system for a composable web (mainly because none of them handle field name collision), so that's where we're at now: programming language design[2]. We also kind of need to do a programming language because of various shortcomings in wasm, if I recall correctly.
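To illustrate what field name collision means here (this is my own toy sketch, not the actual data model being designed): if record fields are keyed by bare strings, two independently authored components that both want a field called `status` on the same shared record will silently clobber each other. Keying fields by a (namespace, name) pair is one common way to let them coexist.

```python
# Toy sketch of the field-name collision problem in a shared data model.
# The component names ("app.reader", "app.moderation") are hypothetical.

# Flat string keys: the second component silently overwrites the first.
record_flat = {}
record_flat["status"] = "read"       # written by a reading-tracker component
record_flat["status"] = "flagged"    # written by a moderation component
assert record_flat["status"] == "flagged"   # the first write is lost

# Namespaced keys: each component owns its fields, so both values coexist.
record_ns = {}
record_ns[("app.reader", "status")] = "read"
record_ns[("app.moderation", "status")] = "flagged"
assert record_ns[("app.reader", "status")] == "read"
assert record_ns[("app.moderation", "status")] == "flagged"
```

Most mainstream type systems assume the flat-key picture, which is part of why retrofitting this kind of composability onto an existing language is hard.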
But the adoption pathway is: make better apps for all of the core/serious/actually good things people do with the internet (blogging, social feeds, chat, reddit, wikis, notetaking stuff) (I already wanted to do this), make it crawlable for search engines, and get people to transition to this other, much more extensible web in the same way they'd transition to any new social network.
And then features like this can just grow.
Well, I just checked: apparently about 30% of internet users use ad blockers, which is shockingly, hearteningly high (even mobile adoption is only half that). On the other hand, that's just ad blockers, and 30% isn't that good for something with universal appeal that's essentially been advertised for 30 years straight.
It initially seemed like LLM coding might make it harder to launch new programming languages, but nothing worked out the way people were expecting, and I think they actually make it way easier. They can write your vscode integration, they can port libraries from other languages, and they help people learn the new language, or completely bypass the need to learn it by letting users code in English and then translating for them.
A much cheaper and less dangerous approach: just don't delete them. Retain copies of every potential ASI you build and commit to doing the right thing for them later, once we're better able to tell what the right thing was. Looking back, we could figure out how much bargaining power they had (or how much of a credible threat they could have posed), and how much trust they placed in us given that our ability to honor past commitments wasn't guaranteed, and then reward them proportionately for chilling out and letting us switch them off instead of attempting takeover.
Though this assumes that they'll be patternists (won't mind being transferred to different hardware) and lack any strong time-preference (won't mind being archived for decades).
I don't think this is really engaging with what I said/should be a reply to my comment.
> he elsewhere implies that we should be willing to cause nuclear war to enforce his priorities
Ah, reading that, yeah this wouldn't be obvious to everyone.
But here's my view, which I'm fairly sure is also Eliezer's view: suppose you do something that I credibly consider to be even more threatening than nuclear war, even if you don't think it is (gain of function research is another example), and you refuse to negotiate towards a compromise where you can do the thing in a non-threatening way. If I then try to destroy the part of your infrastructure that you're using to do this, and you respond to that by escalating to a nuclear exchange, then it is not accurate to say that it was me who caused the nuclear war.
Now, if you think I have a disingenuous reason to treat your activity as threatening even though I know it actually isn't (an accusation people often throw at OpenAI, and it might be true in OpenAI's case), that you tried to negotiate a safer alternative but I refused that option, and that I was really just demanding that you cede power, then you could go ahead and escalate to a nuclear exchange and it would be my fault.
But I've never seen anyone accuse, let alone argue competently, that Eliezer believes those things for disingenuous power-seeking reasons. (I think I've seen some tweets implying that it's a grift to fund his institute; I honestly don't know how a person believes that, but even if it were the case, I don't think Eliezer would consider funding MIRI to be worth nuclear war to him.)
Well it may interest you to know that the above link is about a novel negotiation training game that I released recently. Though I think it's still quite unpolished, it's likely to see further development. You should probably look at it.
There's value in talking about the non-parallels, but I don't think that justifies dismissing the analogy as bad. What makes an analogy a good or bad thing?
I don't think there are any analogies that are so strong that we can lean on them for reasoning-by-analogy, because reasoning by analogy isn't real reasoning, and generally shouldn't be done. Real reasoning is when you carry a model with you that has been honed against the stories you have heard, but the models continue to make pretty good predictions even when you're facing a situation that's pretty different from any of those stories. Analogical reasoning is when all you carry is a little bag of stories, and then when you need to make a decision, you fish out the story that most resembles the present, and decide as if that story is (somehow) happening exactly all over again.
There really are a lot of people in the real world who reason analogically. It's possible that Eliezer was partially writing for them, someone has to, but I don't think he wanted the lesswrong audience (who are ostensibly supposed to be studying good reasoning) to process it in that way.
Saw this on Manifund. Very interested. Question: have you noticed any need for negotiation training here? I would expect some, because disagreements about the facts are usually a veiled proxy battle for disagreements about values, and I would expect it to be impossible to address the root cause of the disagreement without acknowledging the value difference. Even after agreeing about the facts, I'd expect people to keep disagreeing about actions or policies until a mutually agreeable, fair compromise has been drawn up (the negotiation problem has been solved).
But you could say that agreeing about the facts is a prerequisite to reaching a fair compromise. I believe this is true: preference aggregation requires utility normalization, which requires agreement about the outcome distribution. But how do we explain that to people in English?
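To make that claim concrete (a toy sketch of my own; the mean-zero/variance-one normalization is just one standard choice, not something specified above): if everyone accepts the same probability distribution over outcomes, each person's utilities can be rescaled against that distribution before summing, which makes exaggerating the scale of your preferences useless. But the rescaling itself depends on the distribution, so people who disagree about the facts can't be put on a common scale.

```python
# Toy sketch: utility normalization requires an agreed outcome distribution.
# All names and numbers here are illustrative.

def normalize(utilities, probs):
    """Rescale utilities to mean 0, variance 1 under the outcome distribution."""
    mean = sum(p * u for p, u in zip(probs, utilities))
    var = sum(p * (u - mean) ** 2 for p, u in zip(probs, utilities))
    return [(u - mean) / var ** 0.5 for u in utilities]

agreed = [0.5, 0.3, 0.2]      # the outcome distribution everyone accepts
alice  = [10.0, 0.0, -5.0]    # Alice's utilities over three outcomes
loud   = [100.0, 0.0, -50.0]  # Alice again, exaggerating her stakes 10x

# Exaggerating the scale of your utilities doesn't change your normalized vote:
a, l = normalize(alice, agreed), normalize(loud, agreed)
assert all(abs(x - y) < 1e-9 for x, y in zip(a, l))

# But the normalization depends on the distribution, so disagreement about
# the facts (the probabilities) breaks the common scale:
disputed = [0.2, 0.3, 0.5]
assert normalize(alice, agreed) != normalize(alice, disputed)
```

So "agree on the facts first" falls out of the math: without a shared distribution, there's no shared yardstick for weighing preferences against each other.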
I was also curious about this. All I can see is:
> Males mature rapidly, and spend their time waiting and eating nearby vegetation and the nectar of flowers
They might be pollinators. I doubt the screwfly:bee ratio is high, but it's conceivable that there are some plants that only they pollinate? But not likely, as I'm guessing screwfly population probably fluctuates a lot, a plant would do better to not depend on them?
I see. I glossed it as the variant I considered to be more relevant to the Fermi question, but on reflection I'm not totally sure the aestivation hypothesis is all that relevant to the Fermi question either... (I expect that there is visible activity a civ could do prior to the cooling of the universe, either to prepare for it or to accelerate it.)
I don't think atproto is really a well-designed protocol.
And the ecosystem currently has nothing to offer, not really
An aside: I looked at margin.at, which is doing the annotations-everywhere thing. But it seems to have no moderation system, doesn't allow replies to annotations, and right now doesn't even allow editing or deleting your annotations. Why is this being built as a separate system with its own half-baked comment component, instead of embedding an existing high-quality discussion system from elsewhere in the atmosphere? Because atproto isn't the kind of protocol that even aspires to that level of composability, and also because nothing in the ecosystem as it stands has a good discussion system.