Frequently Asked Questions

  • Are you building a generative AI model?

    Quick answer: No, we build targeted, efficient AI tools like surrogate models to solve real-world problems without the massive costs of generative AI.

    We are not building a generative AI model. When we talk about AI, we mean the full history of scientific breakthroughs over the past 65+ years, not just the recently popularized neural networks.
    We don’t use the large, generative models most people think of. Those require enormous datasets and significant computing power, and they operate as total black boxes. Instead, we focus on specialized, efficient tools that can help solve real-world problems like public health challenges, climate research, and nonprofit support. These problems often involve smaller datasets, making our work faster, more transparent, and more resource-conscious.

    One example is the surrogate model. In physics and climate science, researchers use complex simulations to predict outcomes, and these simulations can take a lot of time and computing power to run. A surrogate model is trained on a limited set of results from the original simulation so that it can produce similar results much faster and at far lower cost. This is non-generative AI: built for one specific problem and dataset, it works only in that targeted context.
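    To make the idea concrete, here is a minimal sketch in Python (a hypothetical toy example using scikit-learn, not our production code). A simple function stands in for an expensive simulation, we sample it only a dozen times, and the trained surrogate then answers new queries almost instantly:

    ```python
    # Toy surrogate-model sketch (hypothetical example, not production code).
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expensive_simulation(x):
        """Stand-in for a slow physics or climate run (hypothetical)."""
        return np.sin(3 * x) + 0.5 * x  # imagine each call takes hours

    # A limited budget of simulation runs: just 12 chosen inputs.
    X_train = np.linspace(0.0, 2.0, 12).reshape(-1, 1)
    y_train = expensive_simulation(X_train).ravel()

    # The surrogate: a Gaussian process with only a few hyperparameters.
    surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                                         normalize_y=True)
    surrogate.fit(X_train, y_train)

    # New "what if" queries now cost microseconds instead of hours,
    # and each prediction comes with an uncertainty estimate.
    X_query = np.array([[0.3], [1.1], [1.8]])
    mean, std = surrogate.predict(X_query, return_std=True)
    for x, m, s in zip(X_query.ravel(), mean, std):
        print(f"x={x:.2f}  surrogate={m:+.3f} +/- {s:.3f}")
    ```

    Those dozen training runs are the only times the expensive simulation is invoked; every prediction afterward is effectively free.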

    It’s an extreme challenge to get for-profit companies to stop focusing on popular generative applications, but we believe the fight is worth it!

  • What about AI’s environmental impact?

    Quick answer: We want AI to use less data and less energy, causing far less environmental impact.

    The environmental cost of AI is massive, but not unsolvable. Our general logic: start with smarter methods that need far less data and computing power, which leads to a vast reduction in environmental impact.

    Machine learning needs a complete overhaul. Instead of neural networks, we use alternative models like Gaussian Process Models. These models build structure into the math, so instead of estimating millions of parameters, we estimate only a few.
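    Here is a small illustration of what that structure buys (our sketch using scikit-learn’s off-the-shelf Gaussian process, not the exact models we deploy). All of the structure lives in the kernel, and after fitting, everything the model has “learned” amounts to two kernel hyperparameters:

    ```python
    # Illustrative sketch: a Gaussian process learns a handful of kernel
    # hyperparameters instead of millions of network weights.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Structure lives in the kernel: one length scale (how quickly outputs
    # vary) plus one noise level. Two numbers to estimate, total.
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 5, size=(40, 1))
    y = np.cos(X).ravel() + 0.05 * rng.standard_normal(40)

    gp = GaussianProcessRegressor(kernel=kernel).fit(X, y)

    # After fitting, the entire learned "model" is the kernel itself:
    print(gp.kernel_)
    print("hyperparameters estimated:", gp.kernel_.theta.size)  # 2, not millions
    ```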

    Real Good’s Data Science team brings years of institutional knowledge from studying how to improve the scalability of these models. By building on techniques they developed, we can work toward a brighter future. Their methodology uses a small amount of data, accessing only parts of it at a time. (The famous MuyGPs!) If we can prove these methods work at scale, we can push tech companies and other large polluters to adopt them.
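    To give a flavor of “accessing only parts of the data at a time,” here is a simplified sketch of local nearest-neighbor prediction in plain NumPy. This illustrates the general idea, not the actual MuyGPs implementation: each prediction conditions on just its k nearest training points, so we never build or invert the full N-by-N covariance matrix, no matter how large the dataset grows.

    ```python
    # Simplified local Gaussian process prediction (an illustration of the
    # nearest-neighbor idea, NOT the actual MuyGPs implementation).
    import numpy as np

    def rbf_kernel(A, B, length_scale=0.5):
        """Squared-exponential covariance between two sets of points."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length_scale**2)

    def local_gp_predict(X_train, y_train, X_test, k=10, noise=1e-4):
        """Predict each test point from its k nearest neighbors only."""
        preds = np.empty(len(X_test))
        for i, x in enumerate(X_test):
            # Brute-force neighbor search for clarity; real systems use
            # fast approximate nearest-neighbor indexes.
            nn = np.argsort(((X_train - x) ** 2).sum(-1))[:k]
            # Solve a tiny k-by-k system instead of a huge N-by-N one.
            K = rbf_kernel(X_train[nn], X_train[nn]) + noise * np.eye(k)
            k_star = rbf_kernel(x[None, :], X_train[nn])  # shape (1, k)
            preds[i] = (k_star @ np.linalg.solve(K, y_train[nn]))[0]
        return preds

    rng = np.random.default_rng(1)
    X = rng.uniform(0, 10, size=(5000, 1))  # a "large" training set
    y = np.sin(X).ravel()
    print(local_gp_predict(X, y, np.array([[2.5], [7.0]])))
    # close to sin(2.5) and sin(7.0)
    ```

    In this sketch the cost of each prediction depends on k, not on the 5,000-point training set, which is what lets the approach scale.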

    How much more efficient are we talking?
    Before founding Real Good AI, Dr. Amanda Muyskens collaborated with Dr. Imène Goumiri and Dr. Min Priest on a study showing about a 97% reduction in computational cost from their Gaussian Process Model compared to a convolutional neural network for a similar problem (read the paper). After launching the nonprofit, Dr. Muyskens invited both researchers to join her here, where the team is now working to apply the same approach to more problems and larger datasets, leading to even more efficiency.

  • What about AI and the arts?

    Quick answer: AI should support, not replace, human creativity. We work with artists to protect their work, raise awareness, and fund creative projects.

    The way many tech companies are incorporating art into AI is deeply flawed. They’re trying to replace human artists — a mistake rooted in a fundamental misunderstanding of why humans create art in the first place.

    That’s not to say AI can’t have a place in the arts; it can, but as a tool built collaboratively by artists and scientists, shaped by their needs and creative processes.

    Right now, the industry’s approach is to “throw everything into the AI” and churn out amalgamations, harming working artists in real and personal ways:

    • Signature art styles, developed over years of practice, copied without consent and replaced by poor facsimiles.

    • Voice actors’ iconic and unique performances, cloned before they can protect themselves.

    • Actors’ faces, performances, and bodies deepfaked, crossing boundaries of digital bodily autonomy.

    • Singers and musicians, replaced by digital voices that have never drawn a living breath.

    We don’t claim to have all the answers, but we’re asking the right questions. We bring artists and scientists together for honest, often difficult conversations, acknowledging harms and exploring how to safeguard and nurture human creativity.

    In that vein, stay tuned for our upcoming pilot programs: funding artists through commissions, fostering STEAM education for children, and using our platform to raise awareness in the scientific community. We are listening. If you have ideas for advancing human creativity, we want to hear them!

We know this work is hard. Turning away from neural networks means tackling serious technical challenges, and we can’t do it alone. We need all hands on deck to make this happen: scientists, engineers, artists, educators, and community voices!

It is worth the fight to protect the environment, advance public welfare, and defend human creativity. Research is about trying, learning, and adapting, and failure is an essential part of the process. Every “no,” roadblock, and dead end teaches us something we couldn’t have learned otherwise.

We will lose sometimes, and that’s okay. Those losses are what lead us to the wins that matter, creating real, lasting change for the greatest number of people. We invite you to join us in learning from our losses and celebrating our wins.