We just hosted our first Community Hour, a new regular virtual meetup for everyone building, testing, and evaluating Gen AI agents and LLM applications. Join our growing community where testing is a collaborative conversation, not an afterthought.
A behind-the-scenes look at how we made Rhesis run anywhere and what we learned along the way. It started with a simple question from our first Objectives & Roadmap session: "Can I run Rhesis on my laptop without dealing with cloud credentials?"
Discover how Rhesis AI pivoted from enterprise SaaS to open source, what drove the rebrand, and the lessons every AI startup can learn about aligning brand, product, and community.
Artificial Intelligence (AI) is transforming numerous sectors, profoundly impacting how tasks are performed and decisions are made. However, as AI's prevalence increases, so does the need for trustworthiness: ensuring that AI applications operate as intended and meet the required quality standards.
As Gen AI technology, particularly Large Language Models (LLMs), continues to shape industries across sectors, it is crucial to understand how these applications perform in real-world scenarios and assess their overall quality and trustworthiness.
Over the past few months, I attended more than 10 AI conferences, including PAKcon, the AIAI Summit, the AI & Data Summit, the Trustworthy AI Forum, and AICon.