Cathy Bogaart

Marketing and Communications @ Cathy Bogaart Consulting
11 karma · Joined · Working (15+ years) · Seeking work
cathybogaartconsulting.ca

Bio


I help mission-driven organizations share their value in a way that resonates—with their communities, partners, and funders. Whether it’s through strategic communications, stakeholder engagement, or facilitation, I bring clarity and connection to complex environments.

Over the years, I’ve worked with organizations in research, advanced technology, and innovation—often in spaces where the audiences are diverse and the challenges are layered. My role is to make sure people not only understand the message, but also feel engaged, respected, and part of the solution.

How others can help me

I'm looking for ways to support AI safety and governance.

How I can help others

If you need marketing or communications strategy, or even just basic editing, I can help. I have a pretty good Toronto-based network in technology and social innovation if you'd like an introduction to someone. 

Posts (1)

Comments (5)

I wanted to leave both the "changed my mind" and "made me laugh" emojis! :) 

Originally, I didn't like the idea that AIs should help everyone equally, including potentially terrorists or other bad actors. 

While that seems problematic, it would avoid having to make moral judgements about people at all, which would likely lead to good outcomes overall, for everyone.

What I get from this piece is something that keeps coming up: AI is "just" teaching us about ourselves. (In quotes because it's no small feat.) Which may just be my confirmation bias, but there are many signals here. And if that is true, does that mean that the answer to the threats from AI -- the pathway to AI safety and governance -- may have much to do with how we deal with human threats?

I recognize we don't program humans in the same way, but our culture DOES train us to think and act in certain ways. And certain factors do incentivize us to act outside of those norms. And we are all, essentially, black boxes with these advanced computational brains of ours.

If that's true, to take it further in a positive direction: Does that mean it will be EASIER to deal with the threats because it is faster to program the changes? With humans, it can take generations of cultural shift to change norms.

There are lots of counterarguments I can think of. But it's a curious thought to mull over.

I really appreciated this post — it made me stop and think about something I hadn’t spent much time on before. The question of what would need to be true for me to act made me pause because I don’t have a clear answer. Other than voting (which obviously doesn’t change the situation elsewhere), I’m not sure what tangible, effective actions are available, or how to recognize early warning signs of something with such outsized potential for negative outcomes.

In my work in communications, I often think about how much impact conversations themselves can have — especially when they happen across divides. People are more likely to reconsider their views when they feel genuinely listened to, not argued with. So for me, part of “taking action” might mean practicing and teaching that kind of listening. Creating more understanding rather than more polarization.
It sounds small next to the scale of the problem, but open, honest dialogue feels like an early form of prevention — something that can keep space open for cooperation and empathy. Talking about it at scale is at least something we can do within the scope of our abilities. And that’s what I like about this forum: it gives us a place to discuss what might feel too charged or intimidating to bring up elsewhere.

I don’t have a grand strategy to add, but I did want to share this small perspective: that maybe the act of conversation itself is a worthwhile contribution.

Wow: pre-early bird already SOLD OUT 07/23?! Nice job, Vancouver! Gotta get my butt there!

So exciting to see so many NEW EA conferences (Nigeria, Cape Town, Bengaluru, and Toronto). I'm on EA Canada's EAGxToronto organizing team and I see how much work it is to organize these events. Looking forward to attending my first one Aug 16-18. Then maybe I will apply for Boston's EAG in Nov! Are there any concerns about nearby conferences cannibalizing each other's potential audiences? Is there a maximum number of conferences that we think will produce the most effective outcome? I'm interested to know how CEA thinks about these things.

This is helpful for every walk of life, so it should be no surprise to me that it's also part of being an effective altruist. And yet, it's not something I thought a lot about before I read this post. As an organizer for EAGxToronto (taking place in August) I'm wondering if you think a networking workshop based on these principles (and others) would be useful? It will be my FIRST EAG/x conference, so I'm interested to know what you think.