Rumination 2025: Year One
The first year at a brand-new nonprofit is like laying track in front of a moving train: the train keeps getting faster, but you’re getting REALLY good at laying track. When I joined Real Good AI in March, Mandy and Craig had already been doing the work of an entire organization, wearing every hat imaginable and inventing new ones along the way.
Coming from years in a highly structured federal government role, the shift was jarring. I had to unlearn a lot, especially the instinct to defer to “how it’s always been done.” Day ones know the RGAI comms and social media started off looking highkey government infographic: matter-of-fact and, well… boring as hell. This was, unfortunately, a canon event. Everything changed when Dr. Mandy sat me down and said, “I need you to respect institutions less… like, A LOT less. There’s no blueprint for this, and becoming just another nonprofit statistic isn’t an option.” Relearning the rules of the internet, trying to be kind, generous, and patient with information in an era of rage-driven algorithms, isn’t easy. But every post, video, and update is an opportunity to reach real people who are already living with the consequences of unchecked, profit-driven technology.
When I worked in the federal government, most of my job was returning unclaimed tax refunds to the families of people who died waiting for their claims to be processed. Each case number was a real person and a real loss. It was money that could have helped cover any expense, including end-of-life care. Unfortunately, the speed of government is slower than the speed of progress, and AI isn’t going to wait for us to catch up. Its impacts are already here. It’s SUCH A RELIEF to see a problem and be part of an organization that focuses on finding solutions RIGHT NOW, not when it’s already too late.
But year one at Real Good AI isn’t just about me. While my own experience was shaped by joining an organization being built in real time, the questions underneath were shared: What did we have to unlearn? What surprised us? And how do you do technical work that stays grounded and human? To explore those questions, the ruminations below come from a roundtable conversation with the team.
/start transcript
What did you expect Real Good AI to be in Year One? What did it become?
Mandy: When creating Real Good AI, I thought I knew what it was going to be, because, obviously, Mark and I had talked endlessly about what real good AI was. But it was amazing to see it evolve over the year and become something we could all be excited about. So we started somewhere, and we ended up somewhere completely different. But when I look back at where we ended up, it’s closer to where we meant to start than to where we actually started. It just feels good to have the mission set and, in certain respects, well communicated, and to have this huge to-do list of actionable steps we can take in the new year to make a big difference in this super complicated AI space.
What surprised you most about working in a nonprofit environment?
Imène: This year was my first discovery of what it’s like to do something outside academia and have an immediate impact on people, not just theory and research that isn’t directly applicable. Seeing real people and other nonprofits, helping them, fundraising with them, looking at their numbers and trying to give them guidance. This is a huge, amazing experience; you don’t get to see the social side, the people side, when you’re in academia. It was a great, great plus.
Craig: I've had a lot of fun learning about other nonprofits and getting the chance to talk to people working on all sorts of different missions. Just being able to have all these conversations with researchers and learn, ’cause I don’t know a lot now, but I knew way less at the beginning of the year. It's been cool not only to learn about the current state of AI and machine learning, but also to hear about the past projects the team has worked on and all the stuff they’re working on now. It's very cool to work amongst so many really smart people. Just a fun environment where, traditionally, I would not have had all these conversations with scientific researchers, learning from them and asking them questions.
What has it been like to do technical work through a people‑first lens?
Mayleen: I think maybe my favorite thing about this whole experience has been finding and working with other people who are very passionate about this mission. Not just the AI stuff, but the putting people over profit, really being driven by what helps people and betters this world and humanity versus being driven by profit. I think that's really exciting. I know that in nonprofit spaces generally that's kind of the idea, but getting to do that as a scientist, as a researcher, doing math stuff, is special. Because I'm in applied math, I'm surrounded by people doing LLM and AI stuff all the time, but it's not always through the lens that Real Good AI comes at it from, so I think it's just really great to be a part of that.
Min: For me, I think the neatest thing about this experience has been getting to talk to, and to some extent work alongside, people who are doing work and pursuing problems because they genuinely think it’s the right thing to do, in the right way, to make the world a better place, which is the mandate of science. As opposed to pursuing problems through a lens influenced by either profit motive or, you know, the whim of some executives. So it’s a breath of fresh air.
Was there a realization, while doing this work, that the gap between expert knowledge and public knowledge is bigger than you expected?
Imène: I would say the gap is that we are not talking about the same thing.
For the public, AI means LLMs, whereas for us as scientists, AI means the models, the statistics, the machine learning. It reduces what the general public thinks about AI; they see it as one area of application, and we see it as a different one. So we’re bridging this gap and also broadening their view: showing AI as a great tool for science, but also one to use with caution, because what’s being claimed to the general public isn’t realistic or scientifically real. It’s a big thing that we’re still working on, actually.
Mandy: Well, like, they [the public] know stuff. I was surprised.
The thing I'm most surprised about is, well, I guess, two things. One, how hungry they are for the technical things. Me giving the GP lecture, I could not believe that was one of our most attended streams, that everyone was, like, “yes, tell us the intimate details of how to fit a Gaussian process model.” Like, who would have expected that?! [For the curious, a minimal sketch of what that looks like follows the transcript.] Second, every time I talk to someone about AI, because it's so huge right now, they know stuff that I don't know, right? They’re aware of the news, and they always have some new application I've never thought of that they ran into in their life. So the knowledge goes both ways. It's better that we've built this bridge instead of staying in the ivory tower, and I think both sides are really excited.
/end transcript
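
A quick aside for the curious: the lecture itself isn’t reproduced here, but a minimal sketch of fitting a Gaussian process, using scikit-learn on toy data rather than whatever Mandy actually presented, looks something like this.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy data: noisy observations of a smooth function.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 10.0, size=(30, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=30)

# A smooth RBF kernel plus a white-noise term for observation noise.
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X, y)

# The payoff: predictions come with uncertainty estimates, which is
# the whole point of a GP.
X_new = np.linspace(0.0, 10.0, 100).reshape(-1, 1)
mean, std = gp.predict(X_new, return_std=True)
print(gp.kernel_)  # kernel hyperparameters after fitting to the data
```

Most of the “intimate details” live in the kernel choice and in those uncertainty estimates, which is presumably why an hour on it held a livestream audience.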
Listening back to that conversation, what stood out was how much we learned in our first year and how deliberately the team resisted easy narratives: about AI, about expertise, about speed. Our focus remained on clarity, accountability, and human judgment. There is real tension between what technology promises and what people actually need, and AI will not wait for institutions to catch up. The work is making sure people aren’t left behind while it moves.