AI Disclaimer
CRITICAL: AI IS NOT A THERAPIST
BalanceBoard's AI features are organizational and supportive tools. They are NOT licensed mental health professionals, medical providers, or clinical counselors. AI responses do NOT constitute medical, mental health, legal, or financial advice. If you are in crisis, please call or text 988, or text HOME to 741741, immediately.
1. What the AI Is
BalanceBoard features two AI-powered tools built on Anthropic's Claude language models:
- AI Buddy: An intelligent conversational assistant that helps you manage academic tasks through natural language, detects stress signals in your messages, and navigates you to BalanceBoard features. It reads your current task list and recent wellness data to provide context-aware responses.
- AI Wellness Coach: An analytical tool that reviews your last 7 days of wellness check-ins (mood, sleep, stress, energy) and provides personalized, evidence-informed suggestions for improving your well-being — such as sleep hygiene tips, study break strategies, or breathing exercises.
Both tools are powered by Claude 3.5 Sonnet, a state-of-the-art large language model (LLM) from Anthropic. Like all current LLMs, Claude operates through pattern recognition and probabilistic text generation; it does not "think," "feel," or genuinely understand your situation.
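To illustrate the Wellness Coach's 7-day review described above, here is a minimal sketch of aggregating self-reported check-ins into a summary for analysis. All names (CheckIn, summarizeLastSevenDays, and the field names) are hypothetical illustrations, not BalanceBoard's actual implementation:

```typescript
// Minimal sketch of the Wellness Coach's 7-day review, assuming check-ins
// are stored as one record per day. All names here are hypothetical.

interface CheckIn {
  date: string;       // ISO date, e.g. "2025-05-01"
  mood: number;       // self-reported mood score (1-10)
  sleepHours: number; // self-reported hours of sleep
  stress: number;     // self-reported stress level (1-10)
  energy: number;     // self-reported energy level (1-10)
}

interface WeeklySummary {
  days: number;
  avgMood: number;
  avgSleepHours: number;
  avgStress: number;
  avgEnergy: number;
}

function summarizeLastSevenDays(checkIns: CheckIn[]): WeeklySummary | null {
  const recent = checkIns.slice(-7); // at most the last 7 check-ins
  if (recent.length === 0) return null; // nothing to analyze yet
  const avg = (pick: (c: CheckIn) => number) =>
    recent.reduce((sum, c) => sum + pick(c), 0) / recent.length;
  return {
    days: recent.length,
    avgMood: avg(c => c.mood),
    avgSleepHours: avg(c => c.sleepHours),
    avgStress: avg(c => c.stress),
    avgEnergy: avg(c => c.energy),
  };
}
```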
2. What the AI Is Not — Critical Limitations
Not a Licensed Professional
The AI is not a licensed therapist, psychologist, counselor, social worker, psychiatrist, physician, or any other credentialed health or mental health professional. It has no professional licensure, cannot diagnose any condition, and its responses do not constitute treatment.
Not a Medical Device
BalanceBoard has not been evaluated by the U.S. Food and Drug Administration (FDA) or any equivalent regulatory body as a medical device, digital therapeutic, or clinical decision support tool. It is a consumer wellness application.
Not Infallible or Always Accurate
AI language models can generate confident-sounding responses that are factually wrong, outdated, or contextually inappropriate. Always verify health or wellness advice with a qualified professional. Do not act on AI suggestions alone for important health decisions.
Not a Crisis Service
The AI cannot contact emergency services on your behalf, alert a counselor, or take real-world action in a crisis. If you are in danger, call 911. If you are experiencing suicidal thoughts, call 988.
Not a Tutor or Homework Helper
The AI is explicitly programmed to refuse academic assistance that would constitute cheating, including solving math/science problems, writing essays, or completing assignments. This is a hard guardrail built into the system.
Not Always Available
AI features depend on API availability from Anthropic. Service may be interrupted. Do not rely on AI features for time-sensitive needs.
3. AI Guardrails & Safety Design
We have built extensive safety guardrails into our AI system:
Homework Refusal
The AI is hard-coded to politely refuse requests to solve homework problems, write essays, or complete academic work. This is not configurable by users.
Forbidden Topic Filters
The AI will not engage with political debates, religious arguments, romantic/dating advice, medical diagnoses, financial advice, or legal advice. It redirects these conversations to school-related topics.
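As an illustration of the two guardrails above, the sketch below pre-screens a message against blocked categories before it ever reaches the model. The patterns and names are assumptions; the production guardrails may instead be enforced through the model's system instructions:

```typescript
// Illustrative sketch of homework-refusal and forbidden-topic filtering
// as a pre-screening pass. Patterns and names are assumptions.

const BLOCKED_PATTERNS: Array<{ category: string; pattern: RegExp }> = [
  { category: "homework",  pattern: /\b(solve this (problem|equation)|write my essay|do my homework)\b/i },
  { category: "medical",   pattern: /\b(diagnose|prescribe|prescription)\b/i },
  { category: "financial", pattern: /\b(invest in|stock tip|crypto)\b/i },
  { category: "legal",     pattern: /\b(sue|legal advice)\b/i },
];

function screenMessage(message: string): { allowed: boolean; category?: string } {
  for (const { category, pattern } of BLOCKED_PATTERNS) {
    if (pattern.test(message)) {
      return { allowed: false, category }; // politely refuse and redirect
    }
  }
  return { allowed: true };
}
```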
Crisis Detection & Escalation
The AI is trained to recognize explicit and implicit indicators of distress (e.g., "I can't do this anymore," "nobody cares," "I want to give up"). When detected, it surfaces crisis resources including the 988 Lifeline and recommends the in-app Emergency Toolkit.
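A minimal sketch of a phrase-matching first pass for distress detection follows, using the example indicators above. A production system would lean on the model itself plus broader heuristics; the phrase list and resource text here are illustrative:

```typescript
// Sketch of a phrase-matching first pass for distress detection.
// Phrases and resource text are illustrative, not the actual system.

const DISTRESS_PHRASES = [
  "i can't do this anymore",
  "nobody cares",
  "i want to give up",
];

const CRISIS_MESSAGE =
  "You're not alone. Call or text 988 (Suicide & Crisis Lifeline), " +
  "or open the Emergency Toolkit in the app.";

function checkForDistress(message: string): string | null {
  const normalized = message.toLowerCase();
  const flagged = DISTRESS_PHRASES.some(p => normalized.includes(p));
  return flagged ? CRISIS_MESSAGE : null;
}
```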
Message Length Limits
AI analysis is limited to concise inputs. These length restrictions also mitigate attempts to "jailbreak" or manipulate the AI through long prompt-injection payloads.
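For example, a simple length guard like the sketch below (the exact limit is an assumption) rejects oversized messages before anything is sent to the model:

```typescript
// Sketch of an input-length guard; the limit value is an assumption.
// Oversized messages never reach the model, which also shrinks the
// surface for long prompt-injection payloads.

const MAX_MESSAGE_CHARS = 1000; // illustrative limit

function validateMessageLength(message: string): { ok: boolean; error?: string } {
  if (message.length > MAX_MESSAGE_CHARS) {
    return {
      ok: false,
      error: `Please keep messages under ${MAX_MESSAGE_CHARS} characters.`,
    };
  }
  return { ok: true };
}
```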
No Relationship Simulation
The AI does not simulate romantic relationships, deep personal bonds, or substitute for human connection. If you are lonely or need peer connection, the Connect Hub (Study Pods, Peer Support) is a better tool.
4. How AI Uses Your Data
When you interact with AI features, the following data is sent to Anthropic's API:
- Your messages and the AI's responses (conversation history for context)
- A structured summary of your current task list (titles, due dates, priority levels)
- Your current and recent wellness metrics (stress level, mood score, sleep hours), sent as numeric values with no identifying information
- Platform context (which part of BalanceBoard you're using)
What is NOT sent to Anthropic:
- Your full name, email address, or Google account information
- Your school name or grade level
- The content of your anonymous Vent Room or Peer Support posts
- Any data from other users
- Uploaded files from Study Pods
Anthropic's Data Policy
By default, Anthropic does not use data submitted through its API to train its models; our API agreement with Anthropic reflects this. For complete details, review Anthropic's Privacy Policy.
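To make the lists above concrete, here is a hypothetical sketch of the request context's shape. Field names are illustrative; the point is that wellness data travels as bare numbers and that identity, school, and other-user data are absent by construction:

```typescript
// Hypothetical shape of the request context sent to Anthropic's API,
// mirroring the "sent" and "NOT sent" lists above. Names are illustrative.

interface TaskSummary {
  title: string;
  dueDate: string; // ISO date
  priority: "low" | "medium" | "high";
}

interface AiRequestContext {
  // Conversation history for context
  messages: Array<{ role: "user" | "assistant"; content: string }>;
  // Structured task summary: titles, due dates, priority levels only
  tasks: TaskSummary[];
  // Wellness metrics as numeric values, with no identifying information
  wellness: { stressLevel: number; moodScore: number; sleepHours: number };
  // Which part of BalanceBoard is in use, e.g. "wellness-dashboard"
  platformContext: string;
  // Deliberately absent: name, email, Google account info, school name,
  // grade level, Vent Room / Peer Support content, other users' data,
  // and Study Pod file uploads.
}
```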
5. Legal Context & Regulatory Landscape
The use of AI in student wellness applications is an evolving legal area. BalanceBoard has designed its AI features with awareness of the following legal and regulatory context:
FTC Guidelines on AI Disclosures
The Federal Trade Commission has made clear that presenting AI-generated interactions as human, or failing to disclose AI use in consumer-facing applications, can constitute a deceptive practice. BalanceBoard complies by clearly identifying all AI interactions and maintaining this disclaimer.
FDA Digital Health Policy
The FDA has indicated that AI/ML tools that make clinical decisions may be regulated as medical devices. BalanceBoard's AI is explicitly designed to not make clinical decisions — it supports wellness habits and task management, not diagnosis or treatment. We continuously monitor FDA guidance in this space.
In re: Social Media Adolescent Mental Health Litigation (MDL 3047)
This landmark federal MDL involves claims that social media platforms caused mental health harm to minors through addictive design. BalanceBoard is fundamentally different: we do not use engagement-maximizing algorithms, we do not show ads, we do not exploit behavioral data for retention, and our explicit design goal is to reduce student stress, not increase time-on-app.
AI Companion & Chatbot Cases
Cases involving AI companion apps (like Character.AI) and teen mental health have raised questions about platforms' duty of care when AI encourages unhealthy emotional dependence. BalanceBoard's AI is strictly scoped to task management and wellness monitoring — it does not simulate relationships, offer emotional companionship, or engage in open-ended conversations outside its defined scope.
6. User Responsibilities When Using AI
By using AI features on BalanceBoard, you agree to:
- Use AI suggestions as one input among many, not as authoritative guidance
- Consult a qualified healthcare professional for any mental health concerns that go beyond daily stress management
- Not attempt to use AI features to obtain medical diagnoses, clinical advice, or substitute for therapy
- Not attempt to manipulate, "jailbreak," or circumvent AI safety guardrails
- Report inappropriate AI responses to support@balanceboard.app
- Understand that AI responses about your wellness are based on limited data (your self-reported check-ins) and cannot account for your full medical/psychological history
7. Opting Out of AI Features
You may disable AI features at any time in Settings → AI Preferences. Disabling AI will:
- Stop the AI Buddy from being available in the chat interface
- Disable AI Wellness Coach insights
- Prevent any future data from being sent to Anthropic's API
- Not delete previously generated AI insights (you can delete those separately)
Disabling AI does not affect any other BalanceBoard features (tasks, wellness tracking, Connect Hub).
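Conceptually, the opt-out works as a gate in front of every outbound AI call, as in this sketch (all names are hypothetical):

```typescript
// Conceptual sketch of the opt-out as a gate in front of every outbound
// AI call. When the flag is off, no request is built or sent to Anthropic.

interface UserSettings {
  aiFeaturesEnabled: boolean; // Settings → AI Preferences
}

async function askAiBuddy(
  settings: UserSettings,
  message: string,
  sendToApi: (msg: string) => Promise<string>,
): Promise<string> {
  if (!settings.aiFeaturesEnabled) {
    return "AI features are turned off. Re-enable them in Settings → AI Preferences.";
  }
  return sendToApi(message); // only reached when AI features are enabled
}
```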
8. Reporting AI Problems
If the AI produces responses that are:
- Harmful, dangerous, or promote unsafe behaviors
- Factually incorrect in important ways
- Discriminatory or offensive
- Excessively emotionally manipulative
- Attempting to simulate a personal relationship
Please report immediately to: support@balanceboard.app with subject "AI Safety Report." Include a screenshot or description of the response.
Crisis Resources — If You Need Help Now
- 988 Suicide & Crisis Lifeline: call or text 988 (US). Available 24/7.
- Crisis Text Line: text HOME to 741741. Available 24/7.
- Trevor Project (LGBTQ+): call 1-866-488-7386 or text START to 678-678.
- SAMHSA National Helpline: 1-800-662-4357. Free and confidential.
- Teen Line: call 310-855-4673 or text TEEN to 839863.
- Emergency Services: call 911 for immediate danger.