Thought it would be worth sharing here. No doubt Vitalik will assemble a fantastic team, but it still might be worth spreading the word so the best biorisk and covid researchers reach out and help find the most underfunded/impactful initiatives.

Vitalik's announcement tweets: https://twitter.com/VitalikButerin/status/1487073874193702916

1. CryptoRelief is sending $100m of the $SHIBA funds back to me. I plan to personally deploy these funds, with the help of science advisors, to complement CryptoRelief's existing excellent work with some higher-risk higher-reward covid science and relief projects worldwide.

2. I've cofounded a new org (Balvi) to direct these funds; it is in a better position to make these bets, which are very high-value and global in nature and will bring great benefit to Indians and non-Indians alike. You can follow the funds at this address: https://etherscan.io/address/0xa06c2b67e7435ce25a5969e49983ec3304d8e787

3. Projects will include vaccine R&D, innovative approaches to air filtration and ventilation, testing and much more. More info coming soon!

4. You can also follow CryptoRelief's existing and future excellent work at their transparency page: https://cryptorelief.in/transparency

Question: is there specific great research, or are there efforts that come to mind, that might be non-obvious to fund (beyond Fast Grants covid research funding, iGEM, etc.)?

There is a pretty good multi-billion-dollar program for biodefense that the Biden administration is pushing for. You could try to support those goals either by contributing to Guarding Against Pandemics, who are lobbying to make sure that said plan actually happens, or by investing in some more speculative pandemic-prevention technology that isn't covered by the government spending plan; this 80K interview talks about the potential to create broad-spectrum tests for many different infectious diseases and to do metagenomic sequencing to identify new viruses.

Here are some posts describing key EA priorities in biosecurity.  Not all of these are relevant to immediate covid relief (creating sealed underground bunkers to guard against X-risk is not going to help anyone right now), but some of them are (like investigating UV sterilization technology and designing better forms of PPE):

  • A list of Concrete Biosecurity Projects from Andrew Snyder-Beattie, who is in charge of biosecurity at OpenPhil.  It goes into detail on:
    • Creating a pandemic early-warning center specifically focused on detecting new, unknown pathogens.
    • Designing improved PPE and then convincing the government/military to buy a bunch of them.
    • Research into broad-spectrum medical countermeasures (like the pan-coronavirus vaccines that some labs are currently working on) and rapid-response platforms.
    • Strengthening the Biological Weapons Convention in creative ways, like with a whistleblowing prize. (There's kind of a running EA joke about the pathetically small budget of the Biological Weapons Convention part of the U.N.; they only have something like four employees.)
    • Fundamental science research into improved sterilization technologies along the lines of UV light, antiseptic materials, etc.
    • Specially designed underground bunkers.
  • Here is a Forum article about biosecurity projects for engineers and materials scientists. Will Bradshaw is interested in chatting with anyone about this stuff in more depth, and I'm sure he'd love to talk with you if you wanted.
  • Here are a bunch of brainstormed bullet points about small projects that might be helpful, including lots of smaller stuff on the level of research that a single graduate student or small team could be funded to produce.
  • Of course you could peruse the past grants of OpenPhil's biosecurity program.

Some other things that come to mind:

  • Maybe some kind of advocacy effort to implement stronger bans on gain-of-function research, or even just collecting and presenting data about BSL 3 & 4 labs: where they are, their history of leaks/accidents, etc., along the lines of how the Covid Tracking Project, OurWorldInData, and other organizations compile datasets as a form of journalism and advocacy.
  • Here is a List of Possible FDA Reforms that might be worth trying to push for, either in the USA or among the health regulators of other nations, which are probably in similar situations. 1DaySooner would probably be a great place to start.

I'm not sure if these ideas satisfy the "non-obvious to fund" specification in your question, but hopefully something here has been helpful! Sorry that these suggestions have been so heavy on advocacy rather than science; I'm not a biologist or anything, so I don't have a good picture of that space.

Sounds great! Could you toss some more money to 1Day Sooner to work more on advocacy for human challenge trials?
