I'm not convinced this area is really neglected. For example, Internet usage/addiction has been recognized as a national healthcare issue in China since at least the late 1990s to early 2000s. Public policy measures have been in place for several years now, and increasingly strict limits on minors' screen time have been introduced. These policies are fairly recent in their severity, and I haven't been able to find scientific studies evaluating their impact.
Sources:
Jiang, Q. (2022, September 15). Development and Effects of Internet Addiction in China. Oxford Research Encyclopedia of Communication. Retrieved 15 May 2025, from https://oxfordre.com/communication/view/10.1093/acrefore/9780190228613.001.0001/acrefore-9780190228613-e-1142.
https://www.technologyreview.com/2023/08/09/1077567/china-children-screen-time-regulation/
That's pretty cool. Are you aware of any good work towards tool-assisted LLMs? There is a huge number of databases with biological data (connectomes, phenotypes, variants), especially in genetics, where analysis is somewhat bounded by human interpretation capacity. LLMs could potentially already be used in a guided, tool-assisted manner, which could greatly accelerate analysis.
On a cursory search, this seems to be fairly low-hanging fruit, and a number of preprints in different areas exist, although I haven't investigated them in detail:
https://pmc.ncbi.nlm.nih.gov/articles/PMC11071539/
https://www.nature.com/articles/s41698-025-00935-4
To me, at least, whole-genome interpretation currently seems somewhat bottlenecked by human interpretation capacity.
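To make the "guided, tool-assisted" idea concrete, here is a minimal sketch of the kind of database tool one could hand to an LLM agent: a function that looks up a variant in the public Ensembl REST API and returns a compact summary the model can reason over. The endpoint path and response fields are my assumptions from memory rather than checked against the current docs, so treat it as an illustration, not a working pipeline.

```python
# Minimal sketch of a database "tool" for an LLM agent.
# Assumption: the Ensembl REST variation endpoint and the response fields
# used below; check https://rest.ensembl.org before relying on them.
import requests

ENSEMBL_REST = "https://rest.ensembl.org"

def lookup_variant(rsid: str) -> dict:
    """Fetch basic annotation for a dbSNP rsID (e.g. 'rs699') from Ensembl."""
    resp = requests.get(
        f"{ENSEMBL_REST}/variation/human/{rsid}",
        headers={"Content-Type": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()
    # Return only a compact summary so it fits comfortably in an LLM context.
    return {
        "name": data.get("name"),
        "most_severe_consequence": data.get("most_severe_consequence"),
        "MAF": data.get("MAF"),
        "clinical_significance": data.get("clinical_significance"),
    }

if __name__ == "__main__":
    # In a tool-assisted setup the LLM would decide when to call this and
    # receive the returned dict as a tool message; here we just print it.
    print(lookup_variant("rs699"))
```

The point is less the specific endpoint than the pattern: the model never free-associates over genomic facts, it only interprets what the tool returns, which is exactly where the human-interpretation bottleneck sits.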
Are there any existing works that have attempted to build databases for cross-cause prioritization?
I believe it might be highly valuable to have a resource that doesn't actually attempt to rank cause areas, but essentially just provides a visualization of the current conversion rates between individual causes.
That would require numbers to be available for individual projects (QALYs, numbers of animals saved, etc.), but it would then at least allow one to directly compare charities without much manual work, in the sense of asking: are we potentially making an orders-of-magnitude tradeoff that might not align with our own values?
At least in terms of funding and spending, we already have implicit general value frameworks. Making these explicit would at least be a good starting point for discussions.
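To illustrate what making these implicit exchange rates explicit could look like in the simplest possible form, here is a toy sketch. The charity names, units, and cost figures are invented placeholders, not real estimates; a real resource would need vetted per-project numbers with uncertainty ranges rather than single point values.

```python
# Toy sketch of the "implicit conversion rate" idea: given cost-effectiveness
# figures per charity, derive the exchange rate a marginal funding choice
# implicitly assumes between outcome units.
# All names and numbers below are made-up placeholders, not real data.
from dataclasses import dataclass

@dataclass
class Charity:
    name: str
    cause: str
    cost_per_unit: float  # USD per outcome unit (placeholder value)
    unit: str             # e.g. "QALY", "animal-year averted"

charities = [
    Charity("GlobalHealthOrg", "global health", 100.0, "QALY"),
    Charity("AnimalWelfareOrg", "animal welfare", 2.0, "animal-year averted"),
]

def implied_exchange_rate(a: Charity, b: Charity) -> float:
    """Units of b forgone per unit of a, if a marginal dollar goes to a."""
    return a.cost_per_unit / b.cost_per_unit

a, b = charities
rate = implied_exchange_rate(a, b)
print(f"Funding {a.name} at ${a.cost_per_unit:.0f}/{a.unit} instead of "
      f"{b.name} implicitly values 1 {a.unit} at >= {rate:.0f} x '{b.unit}'.")
```

Even this crude version surfaces the kind of orders-of-magnitude tradeoff mentioned above: with these placeholder figures, a marginal dollar to the first charity implies valuing one QALY at fifty or more animal-years averted, which a donor may or may not endorse on reflection.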
In some discussions I had with people at EAG, it was interesting to discover that there might be a significant lack of EA-aligned people on the hardware side of AI, which seems to translate into difficulties in getting industry contacts for co-development of hardware-level AI safety measures. To the degree that there are EA members at these companies, it might make sense to create some kind of communication space for exchanging ideas between people working on hardware AI safety and people at hardware-relevant companies (think Broadcom, Samsung, Nvidia, GlobalFoundries, TSMC, etc.). Unfortunately, I feel that culturally these spaces (EE/CE) are not very receptive to EA ideas, and the boom in ML/AI has caused significant self-selection of people towards hotter topics.
I believe there could be significant benefit in accelerating realistic safety designs if these discussions can be moved into industry as quickly as possible.