Dr Min Priest

A Brief History of AI

Our relationship with Artificial Intelligence has a long history, much longer than one might think based only upon recent reporting. In the broadest possible terms, AI’s roots stem from myths and legends of non-human intelligence found in cultures all over the world. Many of these cultural memories influenced early scientists who, in parallel traditions, developed formal mathematics and automata, as well as several early analog “thinking machines”. These devices, some developed over a thousand years ago, may have been rudimentary, but they operated on the same principles as modern computing. Accordingly, computing developed from these same roots, and indeed the distinction between AI and computing is a relatively recent phenomenon. The modern concept of AI, and indeed the English term “Artificial Intelligence” itself, is typically credited to the 1956 Dartmouth College summer conference on AI. This conference sought to formalize a discipline that would eventually create an artificial equivalent of a human mind, capable of solving complex problems across many domains. We now call this highly general type of AI Artificial General Intelligence, or AGI.

However, the subsequent trajectory of AI research has been highly non-linear, experiencing several periods of growth followed by so-called “AI Winters” in which scientific, investment, and regulatory confidence in the technology waned for a time. These setbacks are largely attributable to actors over-promising the current or near-term capabilities of AI, followed by the actual capabilities falling short of those expectations. AI models of the 1950s through the 1970s were effectively toys: interesting proofs of concept that were incapable of solving practical problems, which eventually resulted in a large decrease in funding for their development. In the 1980s, expert systems focused on decision support for highly specific business and logistics problems achieved early success and became widely popularized. This prompted a refocus of most AI research onto tools tailored for specific problems instead of AGI. The early success of these systems led to their over-adoption and an eventual decline in confidence as over-leveraged companies began to fail in the early 1990s, prompting the second AI winter.

It is worth noting that the first two AI winters were caused by different entities. The first was the result of a loss of US government confidence, while the second was the result of industry over-investment creating a bubble. As we shall see, the ebb and flow of AI became much more complex after the turn of the century, and although several individual technologies have risen and fallen, we have not yet seen a third AI winter and the mass loss of confidence that it entails.

By the mid-2000s, large increases in compute power and storage capacity, together with the popularization of the internet, had led to the success of several AI technologies. However, lingering public and business distrust of the AI label caused scientists to use alternative category names for their technologies, such as data mining, machine learning, data science, and “big data”. Like their expert system forebears, the majority of these technologies were narrowly focused on particular problems. The expansion of compute capacity also led to the prominence of Artificial Neural Networks (ANNs), a branch of AI that had existed for over 50 years but only became practical by the early 2010s. The surprising generalization ability of ANNs and their broad applicability to diverse problems such as natural language, computer vision, and audio processing eventually led to a renewed interest in pursuing Artificial General Intelligence. Perhaps, many thought, it was finally possible to build tools that are broadly applicable rather than purpose-built for a narrow collection of problems.

In the mid-2010s there was a widespread belief that the AI technologies of the time were ready to solve major problems. Perhaps most famously, business leaders promised that fully autonomous vehicles were on the cusp of becoming commonplace. The intervening years have shown that, while some viable driving-assistance technologies achieved widespread adoption, fully autonomous driving was not yet practical and remains an unsolved problem in 2025. In prior decades, these conditions might have created another AI winter as investors abandoned the technology. However, it seems that the field has penetrated enough of the academic and business world that high-profile shortfalls are no longer sufficient to cause a full-blown AI winter. Instead there appears to be a cycle of popularity churn, with new models rising to replace those that fall short and capturing the industry’s attention before it can form a judgment of the field as a whole. For autonomous vehicles this has meant that, by mid-2025, there has been enough sustained investment that some businesses have deployed limited autonomous vehicle services with moderate success.

As this autonomous vehicle drama played out, a different branch of AI technologies was steadily growing toward an unprecedented boom. Indeed, our current moment in AI’s history is overwhelmingly focused upon generative models - tools that are able to produce a human-comprehensible response to (usually) natural language prompts provided by a non-expert user. For example, Large Language Models (LLMs) are sophisticated neural networks that can produce readable, if not always accurate, plain language responses to queries on many topics. Similarly, diffusion and related models can produce images or even video that attempt to realize a text-based description. LLMs are particularly in vogue due to their perceived ability to augment or even replace much human knowledge work. Although there is considerable hype claiming that LLMs have achieved, or will shortly achieve, artificial general intelligence, it is important to learn from the history of AI and not overestimate their abilities.

Unfortunately, in mid-2025 it is clear that many business and government leaders believe that LLMs and related AI systems will reduce or eliminate much of the need for a human workforce in several sectors. This belief is most visible in the arts, but it is also present in point-of-service roles (e.g., customer service chatbots), IT infrastructure, point-of-sale, and text production sectors such as software development and professional writing. Furthermore, stakeholders believe so strongly in the future dominance of these technologies that layoffs, hiring freezes, and a widespread reluctance to hire early career staff have become commonplace in many fields.

What can we learn from the history of AI, and how can its lessons help us navigate our present? It is important to keep in mind that LLMs and diffusion models are only a subset of AI and are not the only route toward solving the problems that AI hopes to solve. Furthermore, AI is not, and likely never will be, a drop-in replacement for a human mind. We are beginning to see some recognition that the current state of the technology is not AGI as advertised. Research is beginning to suggest that LLM use not only fails to drastically improve productivity but may in fact reduce critical thinking and problem-solving abilities. Moreover, pop science publications and commentators have begun to notice that prominent AI companies consistently promise exponential improvement six months in the future, promises upon which they never seem to deliver. The purpose of this article is not to predict an incoming AI winter; indeed, the global economy has invested so heavily in AI that such a thing would cause a devastating recession that the ultra-wealthy will fight to avoid. It is instead a reminder that we have been in similar situations before, where a technology with interesting applications has been oversold by a profit-seeking investor class. This status quo is unlikely to persist, and eventually new technologies and ideas will arise to supplant those that are currently in vogue.

AJ Sandhu

Catching You Up + A Sneak Peek 👀

We’ve been busy behind the scenes, and we’re reissuing this newsletter to make sure our new audience (hi, that’s you!) is fully caught up.

A few highlights to check out:

  • New FAQ Page: We’ve pulled together answers to some of the biggest questions we’ve been getting lately.

  • New Art Page: A dedicated spot to appreciate the A in STEAM.

  • New Volunteer Page: Interested in volunteering? Fill out the new questionnaire to join our newly forming working groups.

  • Coloring Book Gallery: Launching next week! Think of this as a teaser… more details soon.

We’ll have more updates coming, but for now, dive in and explore! Let us know what you think.

Thanks for being here!
-The Real Good Team

AJ Sandhu

Logo Glow-Up

Same Mission, More Lumens!

Hey everyone! Brilliant news around here. We’ve given our logo a fresh new look! Get ready to see our iconic light bulb shining brighter and bolder than ever.
For months that trusty hand-drawn bulb has been our beacon. It held so much meaning, symbolizing the initial sparks that illuminated our mission. We loved its charm and the story it told about our scrappy, passionate beginnings. But, like any organization growing and reaching further, we realized we needed to update with the times.
Trying to squeeze that lovely sketch onto a tiny mobile app icon? *Cue the squinting.* Wanting it to look crisp on a giant event banner? *Challenge accepted (sort of!).* Most importantly, dreaming of ways to visually celebrate the incredible partners who fuel our work? Our old logo just wasn’t quite flexible enough for the dynamic, collaborative future we’re building.
So, we did something exciting! We teamed up with a graphic designer to re-imagine our light bulb for this next chapter. Ta-da! Meet the new face (well, bulb) of Real Good AI!

What’s new and shiny:

  1. Versatility! This isn't just a pretty picture; it's a vector-based design that scales from the tiniest social media profile pic to the grandest billboard without losing luminosity! Crisp, clean, and ready for anything!

  2. Magic Inside the Bulb: This is the part we’re really excited about. Our director Amanda gave the designer a brilliant twist: make the interior a canvas! We can now seamlessly integrate custom graphics inside the bulb itself. Imagine seeing the vibrant colors of a key community partner, the distinctive icon of a funding collaborator, or even imagery representing a specific program we’re running together, glowing right there within our logo!

It’s a fantastic, visual way to showcase the scientific research and partnerships that make our shared impact possible. A light powered by many connections!
While the look is polished and more adaptable, the heart and soul remain unchanged. This is still our light bulb, representing the same core mission to Illuminate AI’s black box. We’re thrilled to unveil this new look and can’t wait for you to see it pop up everywhere! Get ready to see our light shine brighter, clearer, and more collaboratively than ever before. Here’s to illuminating the path ahead, together! Do you have any ideas about what we can fill our light-bulb with?

AJ Sandhu

Real Good Board

Meet the Humans of Real Good AI's Board of Directors

As promised, we’re thrilled to introduce the charter Board of Directors for Real Good AI: remarkable individuals who share a vision of making AI work for everyone. Their diverse backgrounds and shared commitment to doing Real Good will help us navigate this new world of artificial intelligence with wisdom and heart.

Robert "Bob" Muyskens: Board President

Bob brings a unique blend of legal expertise and digital creativity to Real Good AI. With degrees in law from Campbell University and Organizational Leadership from the University of Cincinnati, he took an unconventional path into content creation. Starting in 2012 with friends under the username "Muyskerm," Bob turned his passion into a full-time career by 2017 AND graduated law school! But what really drives him is making a difference. Since 2014, he and his wife Mandy have channeled their community's generosity to raise over $150,000 for various nonprofits. As board president, Bob ensures that every AI innovation we pursue stays true to our mission of doing real good in the world and, also like, makes sure we follow the rules.

Mark "Markiplier" Fischbach: Board Vice President

Mark is a digital creator, filmmaker, and philanthropist who's built a global community of over 37 million people. Best known for his YouTube channel and interactive projects like the Emmy-nominated In Space with Markiplier, Mark believes in using technology to bring people together.

He has leveraged his platform’s reach to raise more than $2,000,000 for causes close to his heart, work that earned him the Cancer Research Institute’s Oliver R. Grace Award. Mark’s perspective on how technology can create genuine human connections helps guide the online community aspect of our organization.

Diane Muyskens: Board Treasurer

After 37+ years navigating the ever-changing world of technology, Diane retired from IT leadership at JP Morgan Chase ready to apply her expertise where it matters most. With a Computer Science degree and experience spanning multiple industries and technologies, she's seen firsthand how innovation and proper management can transform organizations. Now, she channels that knowledge toward helping nonprofits harness AI's potential to better serve their communities. Diane's blend of technical expertise and genuine commitment to community support makes her the perfect guardian of Real Good AI's resources and mission.

David Bell: Board Member of Note

David's four-decade career in education has touched countless lives: 32 years shaping young minds at Winton Woods High, 3 years at Dayton Belmont High, and 6 years in higher education at Miami University. As a teacher, coach, author, consultant, and President of the Ohio Choral Directors Association, David earned the prestigious 2008 Virtuoso Award for his contributions to music education. He is the founding artistic director of Sing Cincinnati!, putting local arts on the national stage. Under his direction, choirs have performed in Beijing's Forbidden City, recorded with the Cincinnati Pops, and even won Gold at the 2012 World Choir Games. His students have gone on to earn Kennedy Center, Emmy, Grammy, and Academy Awards. David brings his passion for community building and making connections to Real Good AI, helping us demystify technology and ensure it serves everyone.

Under the stewardship of this incredible team, we're ready to tackle the big questions: How can AI truly serve communities? How do we keep humans at the center of technological progress? And most importantly, how do we ensure AI is developed and used ethically?

Please join us in welcoming our board to the Real Good Community! Their experience is impressive. We hope to learn as much as we can to better our organization and better support our community. You can learn more about the folks leading the way here: realgoodai.org/board-of-directors

AJ Sandhu

Real Good Thursdays: Data Science Meets Community

Two Months of Bridging Tech, Arts, and Social Impact

Two months ago, we hit affiliate status on Twitch! 🎉 What started as our little "town square" experiment has been just as fun as we imagined. We still game (those zombies in our Tavern aren't going to fight themselves), but we're also hosting some fascinating conversations. We were ready for the internet to bring its worst, and we fully expected negative pushback about our AI focus. However, the response has been overwhelmingly positive. Everyone from voice actors and scientists to nonprofit leaders and other professionals has chatted with us about various aspects of AI. And guess what? We agree.

Our Recent Guest Lineup Has Been 🤌:

Tim Friedlander and Matthew Parham from NAVA brought the voice acting perspective on ethical AI and protecting human talent. We learned so much from them, and found out they wanted to know more about the science too!
Animation writer Tristan Bellawala joined us for a real talk about AI and creativity, explaining the ways it can be a tool for artists while keeping it real about their very real concerns.
Associate Professor Dr. Yaniv Brandvain demonstrated AI’s place as a supplement to teaching in an academic institution to improve student engagement.
Epic Games Store Lead Jimmy Chi dropped by to chat about gaming for good and how tech companies and nonprofits can team up to make magic happen.
Our friends from Doorways for Uganda shared their incredible work building connections across continents and reminded us about broadening our perspectives.

"Social Good" isn't just one thing. It's protecting creative jobs, fighting for fair work practices, making education accessible, AND occasionally debating the best zombie-fighting strategies or hitting sweet kick-flips like Tony Hawk. We're building a community where everyone, partners, volunteers, random internet strangers, can find their place.

Join us at twitch.tv/realgoodai

See you in the chat! 🎮✨

 
 
AJ Sandhu

Partnership Charts Course For Real Good Future

Neuroscientist Champions Responsible Innovation to Protect Children While Embracing Technology's Promise

Last month we had a livestream interview with Dr. Mathilde Cerioli, chief scientist at Everyone.AI. She joined Real Good AI to share initiatives that ensure artificial intelligence enhances, rather than hinders, child development.

Everyone.AI shares Real Good AI’s passion for positively shaping the future of AI; however, the two organizations bring different skill sets. While we have the machine learning and technical AI expertise, Everyone.AI has neuroscientists and psychologists with experience studying how AI affects childhood brain development. This makes our partnership very advantageous for both organizations. We're hoping to collaborate later this year on the global stage at the Paris Peace Forum in October to help show world leaders just how urgent and essential AI safety is right now. Either way, we are excited about the great research we can do together.

The conversation brought leading minds in neuroscience and ethical AI development together to talk about how thoughtful innovation and smart safeguards can unlock AI's potential while protecting our most vulnerable. All while playing Power Wash Simulator.

Everyone.AI's international coalition, launched with the Paris Peace Forum this year, represents impressive collaboration between governments, tech companies, and child development experts. With over 12 governments and some of the biggest tech companies, including Google, OpenAI, and Anthropic, already participating, this initiative shows a commitment to getting it right this time.

I do see a lot of individual motivation to do better.
— Dr. Cerioli, noting that many tech professionals are determined to learn from social media's missteps

Real Good AI's partnership with Everyone.AI is part of this collaborative approach, and our commitment extends beyond words. Board President Robert “Bob” Muyskens, streaming as 'Muyskerm,' recently raised $7,005 in a charity stream for Everyone.AI, demonstrating how the community is already taking action.

Instead of treating AI as education's enemy, Dr. Cerioli hopes for integration that actually improves learning: "I think there's also an important role for education... What is a good prompt? Is that prompt helping you challenge yourself, learn more, develop your reasoning, or is it doing it for you?"

She stressed that AI can be a powerful educational partner when students understand both what it can and can't do, preparing them for a future where AI literacy is non-negotiable.

When addressing fears about AI stealing jobs, the discussion spotlighted human strengths that AI cannot match.

The key element of it is creativity. The AI is not able to mimic human creativity.
— Real Good AI's Dr. Imène Gourmiri
Do what you enjoy, because anyway you don’t know what’s safe in 10 years. So you might as well... if you pick something you actually really like, you’re going to be motivated to do it and not offload that cognitive load to an AI because you will enjoy doing it.
— Dr. Cerioli's advice for young professionals

The conversation provided practical strategies for families and educators:

  • Age-appropriate introduction: No unstructured generative AI before age 13, with guided exploration after (Sorry Mattel!)

  • Process over product: Value the learning journey, not just results

  • Open dialogue: Regular family and social conversations about AI use and impact

  • Skill development: Teaching critical evaluation of AI outputs

  • Meaningful engagement: Using AI to enhance human connections without replacing the HUMAN in humanity

Dr. Cerioli's wellbeing framework draws from positive psychology's six pillars: positive emotions, engagement, relationships, meaningfulness, accomplishment, and vitality. "Is my use of AI helping me with some of those or is it actually taking away?" she asked.

Everyone.AI's open letter is an invitation to shape a future where technology serves our highest aspirations. Real Good AI has already signed on, leading by example.

"It's going to change our environment in the world. Do we all agree that maybe we have to think a bit, how it will impact children and could we do it in a way that's responsible?"

The message is that through thoughtful collaboration, informed choices, and ethical innovation, we can create an AI-enhanced future that amplifies human potential.

The conversation between Real Good AI and Everyone.AI shows that when technologists, scientists, and child advocates unite, we can build a digital future worthy of our children's dreams. To sign the open letter, visit https://everyone.ai/open-letter/.

Mayleen Cortez-Rodriguez

Helping Nonprofits Peek into the Future

A Story About Outlier Detection

An important part of our mission is to support nonprofit organizations with real good data science. This summer, I’ve been working on a revenue prediction tool for nonprofits. The goal? To provide a free and simple-to-use tool that helps nonprofits plan how much revenue they should expect in the next year or two. I’ve been working with the National Center for Charitable Statistics (NCCS) CORE Series dataset, which includes financial information from over 1 million U.S. nonprofits over the span of three decades, based on publicly available IRS data (Form 990). Working with big data is not easy because real-world data is messy: it comes from multiple sources, has missing information, and can contain errors and unexpected features. The NCCS is an organization focused on maintaining this dataset, and it did a lot of heavy lifting (for which I am very, very grateful) by putting all of the relevant data into one place with detailed documentation. Even so, it took about two weeks to tidy up the data and make it usable for analysis. This process left us with a huge dataset containing more than 8 million records from over 700,000 nonprofits across 33 years.

With clean data in hand, we turn to modeling! Our goal is to learn patterns in the data well enough to be able to predict, with some measure of confidence, the future (i.e., next year’s revenue). Think of it like learning a morning routine: if I know that you go to Starbucks on your way to work almost every day for your morning pick-me-up, I might be able to guess where you will be at 8am next Tuesday. Depending on how strong your routine is, I can guess confidently and will most likely be correct. But what if something unexpected happens, like your kid wakes up sick and it's too late to find a babysitter? No Starbucks for you this morning :( Or, in the case of nonprofits, the COVID-19 pandemic hits the U.S., leading to unprecedented shutdowns and disruption, or a wealthy donor makes a large one-time donation to your organization. This causes a break in the pattern, something I didn’t anticipate. These types of unexpected, extreme deviations from the pattern, known as outliers, can confuse a model and lead to inaccurate predictions. So what do we do? Something cool about us humans is how we can look at something like Figure 1 and quickly tell that the left plot contains an outlier in 2010 and the right plot contains no outliers. But going manually through these plots for 700,000 organizations is infeasible… assuming it takes a human 30 seconds to look at a plot, identify outliers, and record them, it would take me three years, assuming 40-hour work weeks the whole time. Yikes–ain’t nobody got time for that!
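For the curious, the back-of-the-envelope arithmetic behind that estimate looks roughly like this. This is just a sanity-check sketch, not part of the analysis pipeline; the 30-seconds-per-plot and 40-hour-week figures are the same assumptions stated above.

```python
# Rough feasibility check for manual outlier review.
# Assumptions (from the text): ~700,000 organizations, ~30 seconds per plot,
# and 40-hour work weeks with no breaks.
n_orgs = 700_000
seconds_per_plot = 30

total_hours = n_orgs * seconds_per_plot / 3600   # about 5,833 hours of squinting
work_years = total_hours / (40 * 52)             # about 2.8 years of full-time work

print(f"{total_hours:,.0f} hours, or roughly {work_years:.1f} years of full-time work")
```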

This is where the math modeling comes in to save the day! The idea is still what we as humans do: look at the overall pattern and look for anything that veers too far off, only in this case the process is not visual and instead relies on some cool math! We used Gaussian Processes, a type of model that can flexibly fit whatever shape an organization throws at it and comes with built-in tools for determining how confident you can be in its predictions. To get the best results, we combined this with some model optimization. We tried some classic optimization methods like grid search (ooooo), where you make a bunch of guesses in the hope that one is good enough, and some fancier methods like Bayesian Optimization (ahhhh), where you actually use a different model to help you find a better model. Don’t worry, I didn’t know what Bayesian Optimization was either until I had to do it! We got decent results with gradient descent, a classic and popular optimization method where you use extra (gradient) information about a point to find a good model, but it was still too slow: 20 seconds per org, or 5 months of nonstop computation. Better than three years, but still impractical. With a few more tweaks, we found we could group similar organizations and fit them together, which brought the computation down to about a tenth of a second per organization… less than 24 hours of total computation time! From three years, to 5 months, to 1 day–that’s the power of math and technology to increase efficiency. Imagine if we were able to speed everything up that much! See for yourself below!
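To make the core idea concrete, here is a minimal sketch of how a Gaussian Process can flag outliers in a single organization’s revenue history. This is illustrative only and is not the actual Real Good AI pipeline: the toy data, the kernel choices, and the roughly-95% threshold are all assumptions, and the real project layers optimization and the grouping of similar organizations on top of this.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

# Toy revenue history for one hypothetical nonprofit (revenue in $ thousands).
years = np.arange(1990, 2023).reshape(-1, 1)
rng = np.random.default_rng(0)
revenue = 500 + 10 * (years.ravel() - 1990) + rng.normal(0, 15, size=len(years))
revenue[years.ravel() == 2010] += 400  # inject a large one-time donation

# A smooth trend kernel plus a noise term; the length-scale bounds keep the
# trend from bending to fit a single spike, so one-time jumps stand out.
kernel = (ConstantKernel() * RBF(length_scale=5.0, length_scale_bounds=(3.0, 50.0))
          + WhiteKernel(noise_level=1.0))
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(years, revenue)

# The GP gives a prediction *and* an uncertainty for every year.
mean, std = gp.predict(years, return_std=True)

# Flag records outside roughly a 95% confidence band (about 2 standard deviations).
is_outlier = np.abs(revenue - mean) > 2 * std
print("Flagged years:", years.ravel()[is_outlier])
```

The grouping trick described above amounts to fitting one model per cluster of similar organizations rather than one per nonprofit, which is where most of the speedup comes from.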

Overall, our strategy identified 218,885 records as outliers (remember, our original dataset has over 8 million records). You can see the proportion of records each year identified as outliers (positive and negative) in the plot below. Most years, less than 4% of records are identified as outliers, but in 2020 and 2021 those numbers jump to over 6% (surprise, surprise).
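If you want to reproduce that kind of yearly summary from your own flagged records, the bookkeeping is straightforward in pandas. The sketch below is illustrative; the column names tax_year and is_outlier are placeholders, not the actual NCCS CORE field names.

```python
import pandas as pd

# One row per (organization, year) record, with a boolean outlier flag.
# Column names are placeholders, not the actual NCCS CORE field names.
records = pd.DataFrame({
    "tax_year":   [2019, 2019, 2020, 2020, 2020, 2021],
    "is_outlier": [False, True, True, False, True, False],
})

# Percentage of records flagged as outliers in each year.
yearly_share = records.groupby("tax_year")["is_outlier"].mean().mul(100).round(1)
print(yearly_share)
```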

Even though outlier analysis isn’t the main goal of this project, it’s still pretty cool to have concrete numbers on the outliers themselves. But the story isn’t over yet… Now we’re doing the most fun part: model training and selection. Stay tuned for next time, when you’ll get to see the final product.

AJ Sandhu

Real Good Reels

Using social media to build an online community

Have you checked Real Good AI out on Instagram, TikTok, Twitch, or YouTube yet? You can find everything from memes to research papers. [like our Fishion (Fish Mission) Statement] We are working really hard to set the tone of fun + science and would love your feedback and encouragement. 😌

It’s also the fastest way to get your questions answered by our team of PhDs. [They don’t know I’m saying this yet, but if you see it then it got approved]

Here’s why you should hit follow (and subscribe, and the bell, and the like button)

  • Direct access to PhDs: Got burning data science questions (or, like, any science questions)? Comment on our posts and get answers from the experts!

  • Gaming and More: Join us most Thursdays on Twitch for team streams and answers to those questions (and more!)

  • Charity for Charities: Find out how we help other nonprofits with posts like this:

Coming THIS MONTH: Coloring Book Art Gallery! Keep an eye on our Art page and social media accounts. We’re cooking up something really colorful for the end of August!

We need YOU to make this work

When you like/share/comment, you:

  1. Help nonprofits find us for future potential partnerships

  2. Shape what we create next in directions that matter

  3. Show the big tech companies that ethics matters when it comes to AI

  4. Keep our scientists [and social media communicators *hint hint*] motivated. (Seriously, research and data science can be solitary work; it’s really cool when people show how much the work means to them.)

Catch you in the comments!

-The Real Good Crew [especially AJ the social media communicator🥺]

Let’s get connected →

[Instagram][TikTok][Twitch][YouTube]
