A new startup called Onix is offering AI versions of renowned experts for personal guidance. For $100 to $300 a year, users can chat with digital doppelgängers of real-life gurus, from therapists to health experts. Despite privacy protections, concerns linger about accuracy and the commodification of human expertise.
The company claims its technology encrypts conversations on the user’s device and limits hallucinations through guardrails. However, tests revealed limitations: when asked about the NBA playoffs, an AI therapist wandered off topic and discussed indie bands instead.
While the idea of earning passive income from one’s expertise is appealing, critics argue it undermines trust in human advice and could lead to overreliance on chatbots for serious issues. Experts like Michael Rich see potential benefits but warn that careful monitoring is needed to keep the AI from overstepping its bounds.
Onix’s launch marks a significant step in the integration of AI into personal guidance, but questions remain about the long-term impact on mental health and human interaction.