First, one should ask what non-elites can do to make a great positive impact. What comes to mind is donating, learning about EA, developing solutions, and presenting them to their networks. In addition, I was thinking about taking over some of the work of in-network elites so that these privileged individuals can focus more fully on EA-related advocacy within their circles.
There are reasons to refrain from approaching the general public: 1) the risk of reputational loss from public appeals to reject EA, 2) upskilling relatively large numbers of people, whose internal professionalism standards may not match those of global elites, in time-effective communication norms requires specialized capacity investment, and 3) sharing EA concepts in depth with a large number of individuals would constrain experts in the community.
EA should have the capacity to address the latter two concerns: for (2), there should be people who can coach relevant professional communication while remaining open to an individual's expression, and for (3), people can be encouraged to engage with more senior members only after they have learned extensively on their own and with peers.
The remaining challenge in approaching the non-elite public is (1) minimizing reputational loss from public appeals to reject EA. This can be done by avoiding individuals who are likely to advocate against EA and by developing narratives in which such public rejections would benefit the community.
Thus, relevant survey questions could cover opinions on continuous pro bono learning about how to benefit others more, preferred learning models, the link between social media posting and motivation for EA-related learning, and which ads would motivate respondents' peers to start learning. The appropriate ads could then be shown to audiences with low reputational-loss risk and high participation potential, identified by their social media activity.
In addition to gathering data on which advertisements would invite the right people into the community, I thought of identifying the determinants of people's wellbeing in order to find possible win-win solutions, and of conducting a network analysis to target the nodes of influence with the greatest wellbeing impact.
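As a minimal sketch of the kind of network analysis proposed above (the people and ties here are entirely hypothetical, and real analyses would use richer centrality measures), one could rank members of a social graph by how connected they are and prioritize outreach to the most central nodes:

```python
from collections import defaultdict

# Hypothetical influence ties between people (undirected edges).
edges = [
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
    ("carol", "dave"), ("dave", "erin"),
]

# Degree centrality: count each person's direct connections.
degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# Rank nodes by centrality; the top nodes are the outreach targets.
ranking = sorted(degree, key=degree.get, reverse=True)
print(ranking[0])  # carol has the most ties
```

Degree centrality is only the simplest proxy for influence; measures such as PageRank or betweenness centrality would better capture whose wellbeing improvements propagate furthest through the network.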
Similarly to my comment below, if you ask the public about AI ethics and risks, they may first think about themselves, even suppressing their reasoning out of fear. One should also not bore or deter people with framings like 'what should AI do to be nice to your friends, even those who are not,' but instead convey a sense of prestige and importance in answering this question.
Thus, the question could be presented as an intellectual challenge that can inform policy regulating safe AI development, given that the result would be a superhumanly intelligent AI with extensive productive capacity and decisionmaking power, possibly able to understand and influence the needs and motivations of all living beings.
Then, in a scenario of abundance where good decisions can actually be enacted, one can ask what an intelligent entity would seek to motivate so that the needs of all living beings are catered to. Perhaps cooperation and skillfully increasing others' wellbeing?