
Bringing Responsibility to the AI Table: Olga Provotar Speaks at EU-Ukraine Tech Summit 2025

Olga Provotar shared vital insights on responsible AI at the EU-Ukraine Tech Summit, emphasizing ethics, security, and proper implementation for founders.

This year’s EU-Ukraine Tech Summit in Warsaw brought together some of the brightest minds shaping the future of technology—and our co-founder, Olga Provotar, was right there in the thick of it. As a panel speaker on the topic “Responsible AI: What AI Product Founders Should Care About When Introducing Their Solutions to Customers,” Olga shared not just insights, but real-world experience from the trenches.

“I got my PhD in AI before it became mainstream,” Olga began with a smile, setting the tone for a conversation that mixed technical depth with practical wisdom.

At HUSPI, we specialize in digitalizing offline processes using AI tools. That means we’ve watched AI evolve from hype into practical, working solutions. Whether it’s automating patient records in healthcare or generating property descriptions from photos in real estate, we’ve learned that the tech only works when it’s implemented responsibly.

So, what should AI product founders really care about?

According to Olga, it comes down to one simple but powerful principle: Pair every AI solution with clear usage policies and training materials. It’s not optional—it’s essential.

Why It Matters

1. AI Can Inherit Bias and Spread Misinformation

Olga shared a personal story that hit close to home. “My daughter recently asked ChatGPT to draw ‘Spring in Kyiv.’ The image that came back showed Pecherska Lavra—our iconic Ukrainian landmark—placed next to the Kremlin.”

It’s not just a glitch. It’s a window into a deeper problem: AI tools are trained on vast datasets, which often include biased or even harmful narratives. Without intervention, these tools can unintentionally reinforce falsehoods.

“AI doesn’t know truth—it knows patterns,” Olga explained. “And those patterns are based on human-generated data, not always grounded in fact.”

2. Security Isn’t Optional

AI tools don’t just help with content—they’re often used to process sensitive business data. And that brings big risks.

“Imagine your team is feeding internal analytics into an AI tool hosted on someone else’s server,” Olga said. “That’s your company’s core data. Do you really know where it’s going—or who might be learning from it?”

The concern isn’t hypothetical. With limited transparency into how AI models store and use data, businesses risk exposing themselves to competitive threats or compromised decision-making.

So What Can Founders Do?

“Set boundaries,” Olga urged. “Give your clients and teams not just a tool, but also a manual. Explain how to use it, what’s allowed, and—just as importantly—what’s off-limits.”

And beyond policies, foster critical thinking. Encourage people to question AI outputs, challenge results, and never treat machine-generated information as absolute truth.
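To make the "set boundaries" idea concrete, here is a minimal sketch of what an automated guardrail could look like: a small check that screens text for sensitive material before it is ever sent to an external AI tool. The rule names and patterns below are hypothetical illustrations, not part of any product Olga described.

```python
import re

# Hypothetical policy rules: patterns that should never leave the company.
# Real policies would be far broader (client names, financials, credentials).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"(?i)\b(?:api[_-]?key|token)\s*[:=]\s*\S+"),
}

def check_prompt(text: str) -> list[str]:
    """Return the names of policy rules the text violates (empty = allowed)."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

prompt = "Summarize Q3 revenue. Contact: cfo@example.com, api_key=abc123"
violations = check_prompt(prompt)
if violations:
    print(f"Blocked before submission: {violations}")
```

A check like this is no substitute for a written usage policy and training, but it shows how "what's off-limits" can be encoded as an enforceable rule rather than left to memory.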

As AI continues to evolve, so must our approach to using it. Responsibility isn’t a feature—it’s a foundation.


At HUSPI, we believe that building with AI means building responsibly. If you’re looking to develop AI-powered solutions that are not only innovative but also ethically grounded and secure, we’re here to help.

👉 Let’s build something smart—and responsible—together.

Book a call with our experts

Feel free to drop us a message about your project – we look forward to hearing from you!