Data Use & Storage
-
Q: Is my data secure with Claude?
A: Yes, your data is secure. Anthropic runs an automated abuse-detection system for safety purposes, and conversations are reviewed only if that system flags them. Champlain College administrators cannot access your conversations or requests unless you share them, and they receive only anonymized data about platform usage. No party will share or sell your information.
-
Q: Will Anthropic use my inputs to train their models?
A: Anthropic does not train its models using inputs from Pro or Enterprise accounts. The publicly available and user-contributed data referenced in our training materials comes from free-tier users and research participants, not from institutional partners.
-
Q: What level of confidential data can I upload to our Claude environment?
A: Any sensitive or proprietary data that you upload to Claude is secure unless you share it with another user, for example by sharing a conversation or a custom Claude Project. Regardless, never input Level 2 (private) or Level 3 (restricted) data (as outlined in our Data Classification Policy) into any generative AI systems.
-
Q: What happens to my archived information after I no longer have access (for example, because I’ve graduated)?
A: There is currently no automated way to migrate your data if you graduate from or leave Champlain College. If you have a personal Claude account, you can share conversations and projects with that account or manually copy and paste custom instructions and prompts.
About Claude
-
Q: Am I required to use Claude?
A: No, Claude is a tool, and using it is optional.
-
Q: How can I learn more about using Claude?
A: You can find more information on using Claude at Anthropic’s Help Center.
-
Q: What are Claude’s unique advantages compared to other GenAI tools?
A: People love Claude’s conversational tone and appreciate its coding skills. Anthropic, the creator of Claude, has also focused deeply on creating AI models that are safe and reliable.
-
Q: What features are included in my college-sponsored Claude for Education plan?
A: Your college-sponsored Claude for Education account includes:
- Enhanced context window: Upload hundreds of pages of text (up to 500k tokens with Claude Sonnet 4) for analyzing lengthy academic papers, research documents, and datasets.
- Advanced models: Access to the newest, most advanced Claude models.
- Projects feature: Create and organize multiple related conversations with shared knowledge bases.
- Increased usage limits: More messages per day compared to individual plans.
- Priority access: Priority access during high-traffic periods.
- File uploads: Analyze various document types including PDFs, DOCX, CSV, TXT, HTML, and more.
Environmental Impact
-
Q: What is the environmental impact of using Claude?
A: For all sustainability-related questions, please refer to Anthropic’s Claude 4 model card. Anthropic partners with external experts to conduct an annual analysis of its company-wide carbon footprint. Beyond its current operations, Anthropic is developing more compute-efficient models alongside industry-wide improvements in chip efficiency, while recognizing AI’s potential to help solve environmental challenges.
Copyright and Intellectual Property Protection
-
Q: How does Anthropic handle copyright concerns? Are you training on copyrighted content, and could using Claude expose us to IP infringement risks?
A: Anthropic takes copyright protection extremely seriously and has implemented industry-standard safeguards to address copyright concerns.
Our copyright protection approach:
Training Data:
- Anthropic does not train its models using inputs from Pro or Enterprise accounts. For training, we use publicly available data, data we acquire under commercial agreements, data provided by our free-tier users, research participants, or crowd workers, and data we generate internally, with careful curation and filtering processes.
- We respect robots.txt preference signals and other standard web protocols that indicate site owners’ preferences
- We work with publishers and content creators who have concerns about their content
- We’ve implemented technical measures that can reduce memorization of training data
Output Protection:
- Claude is designed to avoid reproducing substantial portions of copyrighted content
- Our models are trained to decline requests for verbatim reproduction of copyrighted works (like song lyrics, books, etc.)
- We continuously improve our systems to detect and prevent potential copyright violations in outputs
Customer Protection:
- Our Commercial Terms of Service include standard indemnification provisions for customers using our services as intended
- We provide clear usage guidelines to help customers use Claude responsibly
- We offer guidance on best practices for avoiding potential IP issues in AI-generated content
Ongoing Commitment:
- We engage constructively with rights holders who have concerns
- We participate in industry discussions about responsible AI development and copyright
- We continuously invest in research and technology to improve copyright protection
Best Practices:
- Don’t ask Claude to reproduce specific copyrighted works
- Treat Claude-generated content as a starting point requiring human review
- Apply your organization’s existing IP review processes to AI-assisted work
- Use Claude’s capabilities for analysis, brainstorming, and enhancement
We see this as an evolving area where technology, law, and industry norms are still developing. Our commitment is to be a responsible participant in these discussions while continuing to develop beneficial AI technology.