
I wanted to share an exciting funding opportunity from the Bill & Melinda Gates Foundation for projects that leverage Artificial Intelligence (AI), with a focus on low- and middle-income countries (LMICs). The opportunity aims to harness the power of Large Language Models (LLMs), including GPT-4, to address challenges and generate evidence across a range of sectors.

Applications for this funding opportunity opened just last week and will close on June 5, 2023. Given the short timeline, there is a high likelihood of limited competition, presenting an excellent chance for smaller, scrappier organizations to secure funding.

Key Details:

  • The foundation encourages proposals led by investigators based in LMICs.
  • Projects should demonstrate clear applications of LLMs, engage relevant stakeholders, exhibit scalability potential, and emphasize responsible use and sustainability.
  • Funding grants of up to $100,000 per project are available, with a total budget allocation of up to $3,000,000.
  • The duration of each project will be three months, offering an opportunity to execute impactful initiatives efficiently.
  • With the available funding, the foundation has the potential to support up to 30 projects!

I think the inclusion of GPT-4 is particularly significant, given Microsoft's $10 billion investment in OpenAI, the maker of ChatGPT.

Thanks and Shameless Plug: 

I first heard about this funding opportunity thanks to Cameron King from Animal Advocacy Africa via the Impactful Animal Advocacy Slack group, which you can join here. It's been a great platform for innovative, cross-disciplinary, international collaboration across all areas of animal advocacy, and I highly recommend joining if that sounds appealing to you.
 

Example Project Ideas (from GPT-4) to Kickstart Creative Thinking: 

  • AI-Driven Market Analysis: A significant AI grant could enable organizations coordinating cage-free egg campaigns to leverage advanced machine learning algorithms for market analysis. By analyzing vast amounts of data, including consumer preferences, purchasing patterns, and industry trends, AI could provide valuable insights into target demographics, identify potential barriers, and help develop effective messaging strategies to promote the adoption of cage-free eggs.
  • LLM-Enhanced Supply Chain Optimization: With LLM-powered natural language understanding capabilities, organizations could leverage AI to analyze textual data from supply chain networks, including supplier contracts, transportation logistics, and inventory management systems. This AI-driven analysis would enable optimized decision-making, improved coordination, and more efficient supply chain management (say for cage-free eggs or malaria bednets), ensuring their availability and accessibility to interested parties.
  • LLM-Assisted Policy Advocacy: With the power of LLMs, organizations could analyze vast amounts of legislative documents, policy reports, and public discourse on a given subject. By employing AI techniques like topic modeling and sentiment analysis, they could identify key policy influencers, track public sentiment, and develop evidence-based arguments to advocate for policies that promote human and animal well-being.
  • AI-Enhanced Financial Inclusion Solutions: Leverage LLM capabilities to develop AI-driven tools that facilitate financial access and empower underserved populations in LMICs to manage their finances effectively.
  • AI-Enabled Data Analytics for Impact Evaluation: A significant AI grant could support the implementation of advanced data analytics tools and techniques by NGOs. They could leverage AI algorithms to analyze large datasets, including epidemiological data, to gain insights into the impact of their interventions. This would enable them to assess the effectiveness of different strategies, identify areas for improvement, and make data-driven decisions to optimize their efforts.
  • AI-Assisted Early Warning Systems: The Against Malaria Foundation could develop AI-based early warning systems to detect potential malaria outbreaks in real-time. By utilizing machine learning algorithms and integrating data from various sources such as weather patterns, mosquito population dynamics, and epidemiological data, they could create predictive models that alert authorities and communities about impending risks. This would facilitate proactive measures, such as intensifying vector control activities and improving healthcare preparedness, to mitigate the impact of malaria outbreaks.
  • AI-Driven Decision Support Tools: With the help of an AI grant, NGOs could develop decision support tools that assist policymakers and health professionals in making evidence-based decisions. By integrating AI capabilities into data visualization platforms and creating user-friendly interfaces, they could provide accessible insights and recommendations regarding resource allocation, intervention strategies, and long-term planning to empower stakeholders to make informed choices and optimize their efforts.
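As a toy illustration of the early-warning idea, here is a minimal sketch of an outbreak risk score. Everything here is hypothetical: the feature names, weights, and alert threshold are invented for illustration; a real system would learn them from historical epidemiological and entomological data rather than hand-picking them.

```python
import math

# Hypothetical, hand-picked weights -- in a real system these would be
# learned from historical outbreak data, not invented.
WEIGHTS = {
    "rainfall_mm": 0.02,          # heavier rainfall -> more breeding sites
    "avg_temp_c": 0.08,           # warmer temps -> faster parasite development
    "mosquito_trap_count": 0.05,  # surveillance trap counts
}
BIAS = -4.0  # baseline log-odds of an outbreak when all features are zero

def outbreak_risk(features: dict) -> float:
    """Logistic model: map weather/entomology features to a 0-1 risk score."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))

def alert(features: dict, threshold: float = 0.7) -> bool:
    """Raise an alert when the predicted risk crosses the threshold."""
    return outbreak_risk(features) >= threshold

wet_season = {"rainfall_mm": 180, "avg_temp_c": 28, "mosquito_trap_count": 40}
dry_season = {"rainfall_mm": 10, "avg_temp_c": 24, "mosquito_trap_count": 5}

print(round(outbreak_risk(wet_season), 2), alert(wet_season))
print(round(outbreak_risk(dry_season), 2), alert(dry_season))
```

In practice the model would be fit to labeled historical data (logistic regression or something richer), and the threshold tuned against the relative cost of false alarms versus missed outbreaks.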
     

A Final Note:
As the funding landscape in the effective altruism world continues to shift towards AI alignment/safety, it becomes increasingly important for global health and animal welfare charities to explore alternative funding sources. This opportunity from the Gates Foundation can serve as a valuable step towards diversifying funding streams and supporting impactful projects in LMICs.

It's time for neartermism to get back on the funding dating scene
Comments



Is it possible to edit the title to say "Deadline June 5th" instead of "Deadline 6/5"? Many people who could be interested might look at the title, think the deadline was the 6th of May, and scroll past the post. Most of the world uses the DD/MM/YY (sometimes YY/MM/DD) format, so this small change could help a lot and attract more potential applicants.

This seems like a really fantastic opportunity, and it would be a great pity if people who would otherwise have been interested in applying ignored it simply because they misread the deadline as the 6th of May instead of the 5th of June.

The edit has been made! Thank you for helping me overcome my Americo-centric framework of the world. :)

Thanks Constance! 

Let me know if you're looking to apply and want to do something together. I have a few ideas in this area, having worked in Sub-Saharan Africa for 10+ years, and I actually wrote a short post on this subject: Large Language Models for Development: Why Information Matters (thegpi.org).
 

Arno,

Thanks for your engagement and your past writing on LLMs and LDCs. I was not personally looking to apply, but with the right partner I would consider it. I have many thoughts about this topic in general and would be happy to chat. I'll DM you my Calendly.

I'm already getting an AI use-case brainstorming session together for animal advocates. Perhaps this could be done for global health/development as well. I recently went to a webinar by deeplearning.ai that demonstrated how to train two different LLMs in under an hour using a highly efficient tech stack. I think the outdated-information problem you mentioned in your blog post can be overcome by training a targeted LLM on up-to-date information and then assessing it against benchmark data. 
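On the "assessing it against benchmark data" point: the core of such an evaluation can be a small harness that scores a model's answers against held-out question-answer pairs. A minimal sketch, where `toy_model` is a stand-in for a fine-tuned LLM and the benchmark questions are invented placeholders:

```python
# Hypothetical benchmark: (question, expected answer) pairs.
# In practice these would come from a curated, up-to-date evaluation set.
BENCHMARK = [
    ("What year is it?", "2023"),
    ("What is the capital of Kenya?", "Nairobi"),
    ("Deadline for the Gates AI grant?", "June 5, 2023"),
]

def evaluate(model, benchmark) -> float:
    """Return the fraction of benchmark questions answered correctly
    (exact-match, case-insensitive -- real evals use richer metrics)."""
    correct = sum(
        model(q).strip().lower() == a.strip().lower() for q, a in benchmark
    )
    return correct / len(benchmark)

# Stand-in "model": a lookup table playing the role of a fine-tuned LLM.
def toy_model(question: str) -> str:
    answers = {
        "What year is it?": "2023",
        "What is the capital of Kenya?": "Nairobi",
        "Deadline for the Gates AI grant?": "May 6, 2023",  # wrong on purpose
    }
    return answers.get(question, "I don't know")

print(f"accuracy: {evaluate(toy_model, BENCHMARK):.2f}")
```

A real evaluation would replace exact match with semantic similarity or model-graded scoring, but the loop is the same: fixed benchmark in, accuracy out, compared before and after fine-tuning.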

If you are interested, here is the full webinar: Building with Instruction-Tuned LLMs: A Step-by-Step Guide by Deep Learning AI

And here is a summary of the webinar I made using the following tech stack:
  • otter.ai speech-to-text (STT) for transcribing
  • GPT-3.5 for summarizing the large amount of transcribed text, chosen for its larger context window
  • Google Docs find-and-replace to fix transcription errors like "QLoRA"
  • GPT-4 for a more advanced summarization pass
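The reason the GPT-3.5 step works on a long transcript is chunking: split the text into pieces that each fit in the context window, summarize each piece, then merge. A minimal sketch of that map-reduce pattern, where `summarize_chunk` is a placeholder standing in for a real LLM API call:

```python
def chunk_text(text: str, max_chars: int = 2000) -> list[str]:
    """Split text into chunks under max_chars, breaking on sentence
    boundaries so no sentence is cut in half."""
    sentences = text.replace("\n", " ").split(". ")
    chunks, current = [], ""
    for sentence in sentences:
        if not sentence.strip():
            continue
        piece = sentence if sentence.endswith(".") else sentence + "."
        if current and len(current) + len(piece) + 1 > max_chars:
            chunks.append(current.strip())
            current = ""
        current += piece + " "
    if current.strip():
        chunks.append(current.strip())
    return chunks

def summarize_chunk(chunk: str) -> str:
    """Placeholder for an LLM call (e.g. GPT-3.5). Here it just keeps the
    first sentence of the chunk as a stand-in 'summary'."""
    return chunk.split(". ")[0].rstrip(".") + "."

def summarize_transcript(transcript: str) -> str:
    # Map: summarize each chunk independently; reduce: join the summaries.
    # A second pass (GPT-4, in the workflow above) could then polish the result.
    return " ".join(summarize_chunk(c) for c in chunk_text(transcript))
```

The chunk size would be set from the model's actual context limit (in tokens, not characters, for a real API), and the joined chunk summaries fed through a final summarization call.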

~~start of AI content

The video demonstrated the process of building and fine-tuning two large language models (LLMs). It highlighted the importance of instruction tuning, which aligns the model with human expectations in terms of bias, truthfulness, toxicity, etc., and fine-tuning, which refines the model for specific tasks.

Several tools and methods were mentioned for the fine-tuning process:

  • Dolly 15k, a dataset with 15,000 high-quality human-generated prompt-response pairs.
  • OpenLLaMA, a permissively licensed open-source language model that can be fine-tuned for commercial use.
  • QLoRA, a parameter-efficient fine-tuning method that trains small low-rank adapters on top of a quantized base model, greatly reducing memory requirements.
  • The supervised fine-tuning trainer (SFTTrainer) library, a tool that facilitates the fine-tuning process.
  • They also talked about the use of quantization, which reduces the size of weight matrices, optimizing computing resources. This is particularly useful when using limited resources such as Google Colab, which was mentioned as a viable platform for training these models.
  • Two methods of fine-tuning were discussed: supervised and unsupervised. Supervised fine-tuning involves using clearly labeled instructions to train the model, while unsupervised fine-tuning allows the model to learn without specific targets or labels. Both methods have their advantages and drawbacks: supervised fine-tuning requires more time to organize the dataset, while unsupervised fine-tuning can be done faster.
  • The presenters demonstrated the process of fine-tuning using both real and synthetic data. Synthetic data, generated by GPT-4, was used to demonstrate the process of fine-tuning a model for generating marketing emails.
  • The webinar concluded with the reminder to continuously monitor metrics and evaluate the performance of the models for specific tasks, emphasizing that building LLMs can be done by anyone without needing vast computational resources, especially with tools like QLoRA. They provided a GitHub repo for resources and examples for prompt engineering and fine-tuning.

This instructional video demonstrated the value of building and fine-tuning Large Language Models, and how this can be achieved even with limited resources. It provides a comprehensive guide on how to approach this complex task, and offers insights on optimizing performance and efficiency.

~~end of AI content
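On the quantization point in the summary above, here is a minimal sketch (my own illustration in plain Python, not from the webinar) of symmetric 8-bit quantization of a weight vector. Real schemes, including the 4-bit approach QLoRA builds on, are more sophisticated, but the core trade-off is the same: store weights in fewer bits, accept a small rounding error.

```python
def quantize(weights: list[float], bits: int = 8):
    """Symmetric quantization: map floats to signed integers in
    [-(2**(bits-1) - 1), 2**(bits-1) - 1] using one shared scale factor."""
    qmax = 2 ** (bits - 1) - 1              # 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    """Recover approximate float weights from integers plus the scale."""
    return [x * scale for x in q]

weights = [0.12, -0.53, 0.90, -0.07, 0.31]
q, scale = quantize(weights)
restored = dequantize(q, scale)

# int8 storage needs 1 byte per weight instead of 4 (float32): a 4x saving,
# at the cost of a rounding error bounded by half the scale.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q)
print(f"max rounding error: {max_err:.4f}")
```

This is why quantized models fit on modest hardware like Google Colab: the weight matrices shrink roughly in proportion to the bit width, while the rounding error stays small enough that accuracy is largely preserved.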

Please note that I have no tech background whatsoever and only started seriously diving into AI a month ago, so any errors in phrasing or concepts are a result of me still coming up the learning curve. If anyone has any corrections to what I said here, PLEASE let me know!
