AllVoices AI is built with guardrails that reduce risk—not create it. From strict data handling policies to a human-in-the-loop design, our system is intentionally structured to support compliance, protect sensitive information, and ensure AI is never making decisions on its own.
For further questions, visit our Trust Center.
AI is a helper, not a decision-maker
AllVoices AI is designed to support - not replace - your people, your decisions, and your process. AllVoices AI helps streamline workflows, surface relevant policies, and summarize factual information documented by humans.
Our AI never takes action or creates documentation without human review and approval. And our AI never makes decisions or draws conclusions from sensitive data. At the same time, by ensuring timely resolutions, standardizing documentation, and giving employees a fair process, AllVoices helps reduce the risks legal teams care about most. Below is a breakdown of the most common questions we hear from legal teams—along with how we intentionally address each one.
1. Does using AllVoices reduce legal risk?
Yes.
AllVoices and our AI implementation help your team reduce legal risk, not increase it. AllVoices:
- Ensures a consistent, compliant process across all employee relations cases
- Helps teams resolve cases up to 70% faster, often de-escalating issues before they reach litigation
- Ensures case timelines, evidence, and communication logs are organized and tamper-proof—critical if a case ever moves to litigation
In the event of legal action, everything you need is clearly documented and accessible to the appropriate parties: no digging through spreadsheets, Slack messages, or email chains.
2. Does AllVoices AI document recommendations?
No.
AI-generated recommendations are not logged or discoverable unless a human accepts them. Examples include:
- Drafted employee responses
- Suggested tasks or resolutions based on company policies
- Surface-level recommendations like policy lookups
In each case, the system requires explicit human review and confirmation. Unaccepted or ignored suggestions disappear and are never stored in the case file.
3. Does AllVoices recognize Attorney-Client Privilege?
Yes.
AllVoices includes an Attorney-Client Privilege option on every case. This allows admins to flag any case containing communications between legal counsel (internal or external) and other members of the company.
These conversations (including all notes, messages, and related records) are considered privileged and not subject to discovery.
The Attorney-Client Privilege feature lets your team mark which cases include attorney communications that must be excluded from discovery if the case goes to litigation.
4. Does AllVoices AI make decisions?
No.
Our AI cannot take any action—such as documenting a recommendation, creating a task, or sending a message—without a human reviewing and approving it. AllVoices AI supports human decision-making. It does not replace it.
5. Does AllVoices AI write reports?
No.
Our AI drafts Case Summaries based on details provided by employees submitting reports or by admins creating cases.
Our AI summarizes the information provided and does not inject any additional information or draw any conclusions in the brief Case Summary.
Our AI also drafts Investigation Summary Reports, exclusively based on structured information, evidence, and interviews provided by the Investigator. Before a case can be closed, the Investigator must confirm they’ve reviewed, edited, and approved the report.
Recommendations, Resolutions, and Outcomes are determined by humans and kept completely distinct from the Investigation Summary Report, which is a factual summary of the information provided, initially drafted by AI and edited and approved by a human Investigator.
Once a Case or Investigation is closed by the admin or Investigator, summaries and reports are locked and cannot be edited, ensuring integrity and preventing post-close tampering.
6. Does AllVoices help ensure consistency across cases?
Yes.
Vera, our AI assistant, references your company’s uploaded policies and previous case precedent to surface relevant information that supports consistent decision-making over time. This helps HR professionals apply standards more evenly and reduces the risk of:
- Disparate outcomes
- Inconsistent application of company policy
- Claims of unfair or biased treatment during investigations
7. Does AllVoices actively prevent AI bias?
Yes.
Fairness and consistency are at the core of AllVoices’ mission. We know that bias - whether conscious or unconscious - is one of the most critical risks for HR and legal teams to manage, especially when handling sensitive employee relations issues. That’s why our AI implementation is carefully designed to reduce opportunities for bias:
- Vera is explicitly instructed, in multiple ways, to avoid bias and remain neutral in its responses. Information is also presented in contexts that encourage neutral, fact-based analysis
- The AI is limited to summarizing information manually entered by humans—it does not draw conclusions or speculate
- By surfacing relevant policies and past precedent, Vera supports fair, consistent application of your standards across similar cases—reducing the influence of subjective decision-making
8. Does AllVoices actively prevent AI hallucinations?
Yes.
AllVoices AI does not access the internet and is restricted to:
- Your uploaded company policies/handbook
- Manually entered case details
- General knowledge that the LLM is trained on
By working only with verified inputs, we significantly reduce the risk of the hallucinated or fabricated content seen in consumer-facing LLMs, which have fewer constraints and open access to the internet.
Furthermore, AllVoices employs several tactics to prevent hallucinations:
- Automated verification that compares each response against the provided data to confirm it is accurate and free of hallucinations
- Carefully monitored automated testing whenever prompts or logic change, ensuring responses meet strict requirements
- Frequent manual testing to catch hallucinations or potential symptoms of hallucinations before they occur
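To give a rough sense of how an automated grounding check can work, the sketch below flags numbers and proper nouns in a draft summary that never appear in the underlying case material. This is a minimal, hypothetical illustration; the function name and logic are ours, not AllVoices’ actual implementation.

```python
import re

def find_unsupported_terms(summary: str, sources: list[str]) -> list[str]:
    """Flag numbers and capitalized terms in the summary that never
    appear in any source document -- a cheap proxy for hallucinated
    details. (A substring check is crude; real systems would use
    tokenization or entity matching.)"""
    corpus = " ".join(sources).lower()
    # Candidate "facts": numbers and capitalized words
    candidates = set(re.findall(r"\b\d[\d,.]*\b|\b[A-Z][a-z]+\b", summary))
    return sorted(t for t in candidates if t.lower() not in corpus)
```

A summary that sticks to the source yields no flags, while one that invents a date or place gets caught before a human ever sees it.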
9. Does AllVoices keep my company policies private and secure?
Yes.
AllVoices takes data privacy and security seriously. All data you upload to our platform—including your company’s policies—is stored securely in AWS infrastructure and protected using best-in-class security protocols.
We are SOC 2 Type II certified and GDPR compliant, meaning we meet rigorous standards for protecting data both at rest and in transit. Your policies are never shared outside your organization. They are used exclusively within your instance to support case handling, recommendations, and insights.
No other customer’s AI instance will have access to or benefit from your policies, and your policies are never used to train or inform the AI models in a shared or general way. Additionally, access to your data is tightly controlled, monitored, and restricted based on role-based permissions. We do not use your policy data for any purpose other than enabling functionality within your secure environment.
10. If I delete a policy, will AllVoices AI retain the information?
No.
When you delete a policy from your AllVoices instance, it is permanently removed from our system. Our AI does not retain, remember, or continue to reference any information from deleted policies in future tasks or responses.
The AI operates based on the current set of uploaded policies in your instance at the time a task is completed. It does not “remember” prior versions, nor does it store deleted content for future use. This ensures that your AI assistant is always referencing the most up-to-date policy information you’ve provided, and nothing else.
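Conceptually, this works because the AI’s context is assembled fresh from the current policy store on every task, so a deleted policy is simply absent from the next request. The sketch below is a minimal illustration with hypothetical names, not AllVoices’ actual implementation.

```python
def build_ai_context(policy_store: dict[str, str], case_details: str) -> str:
    """Assemble the prompt context from the *current* policy set only.
    Deleted policies are absent from the store, so they can never
    reappear in a later task or response."""
    policy_text = "\n\n".join(
        f"## {name}\n{body}" for name, body in sorted(policy_store.items())
    )
    return f"Company policies:\n{policy_text}\n\nCase details:\n{case_details}"
```

Because nothing is cached between tasks, removing a policy from the store is sufficient to remove it from all future AI context.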
This behavior is intentional and important for compliance. It allows your team to maintain full control over what information is available to the AI and ensures that outdated or deprecated policies do not influence future case work or decision-making.
11. Does AllVoices allow customers control over AI features?
Yes.
AllVoices allows customers to enable or disable AI features throughout the flow.
For example, if you do not want AI to recommend tasks, you can disable that feature. If you do not want AI to identify policy gaps (even though these are not documented or discoverable), you can disable that feature as well.
12. Is my data used to train Large Language Models (LLMs)?
No.
Your data is never used to train or fine-tune large language models (LLMs). Per OpenAI’s API and Enterprise Terms, data passed through their API is not retained or used to improve the model. AllVoices maintains strict internal data controls to ensure full confidentiality and compliance.
Visit our Trust Center for more information on our security practices and certifications.