Last Thursday's meetup was one of those nights where you walk in expecting a structured panel discussion and walk out three hours later still mid-argument in the parking lot. That's the best kind of meetup, honestly.
About 40 people came out—developers, a few business owners, a nurse practitioner, a high school teacher, and a handful of folks who just showed up because they saw the flyer at the library. That mix matters. AI ethics isn't a purely technical problem, and the conversation reflected that in ways that were sometimes uncomfortable, often illuminating, and occasionally a little heated.
Here's what stuck with us.
Bias Isn't a Bug You Can Just Patch
One of our regulars, a data scientist who works in healthcare analytics, kicked things off by walking through a real example—not hypothetical—where a clinical decision-support tool was consistently underestimating pain levels in Black patients. The room got quiet fast.
The thing is, the model wasn't broken in any traditional sense. It was doing exactly what it was trained to do. The problem was baked into decades of skewed medical data, and the team that built the tool didn't catch it because they weren't looking for it. Or maybe they didn't know to look. Either way, people were affected.
This sparked a long thread about where responsibility actually sits. Is it the data scientists? The hospital administrators who bought the tool? The researchers whose biased studies fed the training data in the first place? Spoiler: we didn't reach a clean answer. But the consensus was that "we used the data we had" is not a sufficient defense anymore—not when the stakes are this high.
Practical takeaway from the group: bias audits need to happen before deployment, not after something goes wrong. And they need to involve people outside the dev team—people who understand the context where the tool will actually be used.
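To make that concrete, here's a rough sketch (in Python) of one kind of pre-deployment check that came up: compare the model's miss rate across groups on a holdout set before anything ships. The group labels, threshold, and toy data below are entirely made up for illustration—a real audit would run on actual evaluation data and pull in the clinicians and community members who know the context.

```python
# Minimal sketch of a pre-deployment bias audit: compare the rate at which
# the model misses true high-pain cases across groups. All names, numbers,
# and the 0.1 threshold are illustrative placeholders, not a standard.
from collections import defaultdict

def false_negative_rate_by_group(records):
    """records: dicts with 'group', 'actual' (1 = high pain), 'predicted'."""
    misses = defaultdict(int)     # actual high pain, model said otherwise
    positives = defaultdict(int)  # all actual high-pain cases per group
    for r in records:
        if r["actual"] == 1:
            positives[r["group"]] += 1
            if r["predicted"] == 0:
                misses[r["group"]] += 1
    return {g: misses[g] / positives[g] for g in positives if positives[g]}

# Toy evaluation set standing in for a real holdout sample.
evaluation = [
    {"group": "A", "actual": 1, "predicted": 1},
    {"group": "A", "actual": 1, "predicted": 1},
    {"group": "A", "actual": 1, "predicted": 0},
    {"group": "B", "actual": 1, "predicted": 0},
    {"group": "B", "actual": 1, "predicted": 0},
    {"group": "B", "actual": 1, "predicted": 1},
]

rates = false_negative_rate_by_group(evaluation)
print(rates)  # roughly {'A': 0.33, 'B': 0.67} here: a gap worth investigating
if max(rates.values()) - min(rates.values()) > 0.1:
    print("Flag for review before deployment, with people who know the context.")
```

The point isn't the specific metric—false negatives, calibration, whatever fits the use case—it's that the check happens before the tool touches a patient, and that someone outside the dev team decides what counts as an acceptable gap.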
Transparency Is Harder Than It Sounds
Somebody brought up explainability, which opened a whole other can of worms. There's this idea floating around that AI systems should be able to explain their decisions in plain language. Sounds reasonable. But the teacher in the room pushed back hard: "Explain to whom? In what language? At what level of detail?"
She had a point. An explanation aimed at a radiologist looks completely different from one aimed at a patient, or a judge, or a city council trying to figure out whether to use predictive policing software. One-size-fits-all explainability is kind of a myth.
We talked about the EU AI Act for a bit—some people had read it, most hadn't, nobody fully understood it yet. But the general idea of tiered risk categories resonated with the group. Not every AI system carries the same stakes. A recommendation algorithm for a playlist is not the same as one influencing bail decisions. Treating them identically under some blanket policy doesn't make sense.
What we kept coming back to: transparency has to be designed for a specific audience with a specific purpose. Vague commitments to being "open" don't cut it.
The Consent Problem Nobody Wants to Talk About
Okay, this one got spicy. A developer in the group raised the question of training data consent—specifically, whether it's ethical to train large language models on text that people wrote without knowing it would be used that way.
The room split pretty cleanly. Half the people felt that publicly posted content is fair game—if you put it on the internet, you gave up some control over it. The other half felt that's a convenient rationalization that ignores how people actually think about their own writing.
The nurse practitioner said something that's still rattling around in my head: "When I post in a medical forum trying to help a colleague, I'm not consenting to train a product someone will sell. I'm trying to help a person."
There's no clean resolution here. But the conversation pushed us toward thinking about consent not as a checkbox but as an ongoing relationship. Opt-out mechanisms, clearer data provenance, maybe even compensation models—these aren't wild ideas. Some companies are already experimenting with them.
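None of this is standardized yet, but here's a rough sketch of what "consent as an ongoing relationship" might look like in practice: every piece of training text carries its provenance, and opt-outs get re-checked at training time instead of being assumed from the original scrape. Every field name and the opt-out registry below are hypothetical—this is a thought experiment, not anyone's production pipeline.

```python
# Rough sketch: training documents carry provenance and consent status,
# and opt-outs are applied at training time. Field names, status values,
# and the opted_out_authors registry are all hypothetical.
from dataclasses import dataclass

@dataclass
class SourcedText:
    text: str
    source_url: str
    author_id: str
    collected_at: str      # when it was scraped or submitted
    consent_status: str    # e.g. "explicit", "public-posted", "unknown"

def filter_for_training(documents, opted_out_authors):
    """Keep only documents whose authors haven't opted out and whose
    consent status meets whatever bar the organization has committed to."""
    allowed = {"explicit"}  # a policy choice, deliberately narrow here
    return [
        d for d in documents
        if d.author_id not in opted_out_authors and d.consent_status in allowed
    ]
```

Whether `"public-posted"` belongs in that allowed set is exactly the argument the room split over.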
Who's Actually at the Table?
The last big theme of the night was representation. Who's building these systems? Who's deciding what counts as ethical? And who's most affected when things go wrong?
The uncomfortable answer, which most people in the room already knew but said out loud anyway, is that the people building AI are not demographically representative of the people using it—or being used by it. That gap has consequences.
One person made the argument that ethics boards and advisory panels are often performative. They exist to check a box, not to actually change decisions. That's cynical, maybe. But it's right often enough that you can't just dismiss it.
What can smaller organizations and communities do? A few ideas floated around: partnering with community organizations when building tools that affect specific populations, actually paying community advisors instead of asking for volunteer labor, and being willing to kill a project if the community most affected says it's harmful. That last one is the hard one.
So Where Does This Leave Us?
Honestly? With more questions than answers, which is probably the right outcome for an ethics conversation. The goal isn't to walk away with a tidy framework you can apply to every situation—it's to build the habit of asking harder questions earlier in the process.
A few things we'd encourage every AI practitioner to carry into their work:

- Run bias audits before deployment, not after. And involve people who actually understand the use context.
- Design explainability for a specific audience. Who needs to understand this, and what do they need to know?
- Take data consent seriously. "It was public" isn't the same as "it was consented to."
- Diversify who's in the room when decisions get made. And compensate them fairly.
We're planning a follow-up session focused specifically on AI in local government—there's a lot happening in New Hampshire on that front that deserves its own night. Keep an eye on the events page.
If you were there Thursday, drop your thoughts in the comments. And if you missed it—come next time. These conversations are better with more voices in the room.
