AI Policy Foundation
This public AI policy is written as AgentAlly's behavioral truth layer: what the AI helps with, what it does not do, when humans must review and approve, what uses are prohibited, and why current assistive behavior should never be mistaken for default autonomous operation.
Launch foundation draft for counsel, product, and trust-review alignment. This is the plain-English behavioral boundary layer for AgentAlly's current assistive AI posture. If direct-recipient or autonomous modes are ever introduced, they must be enabled and documented separately rather than inferred from this page.
These terms keep the AI boundary language consistent with the legal and product surfaces around it.
AI Output. Any draft, summary, recommendation, classification, generated content, or other machine-generated output produced by AgentAlly.
External Communication. Any email, SMS, call script, direct message, document transmittal, marketing asset, or other content intended to be sent, shared, or delivered outside the Service.
Approved Communication. A specific External Communication that an authorized user reviewed and affirmatively approved for transmission or release.
Assistive Mode. The current launch posture in which AgentAlly helps prepare, summarize, organize, and recommend work while humans retain approval and final-decision authority.
Direct-Recipient Mode. Any future mode in which AI could communicate directly with an external recipient, auto-reply, or send without the current per-item approval flow. This is not the default posture described by this policy.
Section 1. Purpose and Scope
This policy explains, in plain English, how AgentAlly's AI features are meant to behave at launch. It is the user-readable behavioral policy layer that sits alongside the Terms of Service and Privacy Policy, rather than replacing them.
The goal is to make the biggest trust questions obvious: what the AI helps with, what it does not do, when a human must step in, what uses are prohibited, what should not be relied on, what happens when the system is wrong, and how future direct-recipient or autonomous modes would be treated separately.
This policy is intentionally product-specific. It is not a generic model-provider policy, not a full statutory digest, and not the place where AgentAlly tries to hide launch-critical boundaries in fine print.
Section 2. What the AI Helps With
AgentAlly may generate drafts, summarize notes and communications, organize work and context, recommend next steps, surface priorities, and help stage outbound actions for review. The product is designed to help licensed professionals move faster on preparation and coordination while staying inside visible trust boundaries.
That can include turning voice notes into structured follow-ups, preparing review-ready email or SMS drafts, summarizing transaction context, organizing contact or deal history, and recommending a next move or workflow sequence based on workspace context.
In other words, AgentAlly is built to assist thinking and preparation. It is not presented as an invisible substitute for human judgment or approval.
Section 3. Human Review and Approval
Users remain responsible for reviewing AgentAlly outputs before relying on them, using them, or approving them. Human review is not ceremonial. It means the user has a meaningful chance to inspect the actual content, change it, reject it, or replace it before a sensitive action occurs.
External Communications require human review and approval before sending unless a separately documented feature clearly provides otherwise. A draft becomes an Approved Communication only when a user intentionally approves that specific send or release.
Connected accounts, saved preferences, templates, or prior approvals do not create blanket permission for silent future sends in current Assistive Mode. If content changes, approval needs to track the reviewed version rather than an older or different draft.
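One way to make "approval tracks the reviewed version" concrete is to bind each approval to a fingerprint of the exact content the reviewer saw, so that any later edit invalidates the approval. The sketch below is illustrative only, assuming a simple hash-based check; it is not AgentAlly's implementation, and every name in it is hypothetical.

```python
import hashlib
from dataclasses import dataclass


@dataclass(frozen=True)
class Approval:
    """Records which exact draft a user approved (hypothetical structure)."""
    draft_id: str
    content_hash: str  # fingerprint of the text the reviewer actually saw


def fingerprint(text: str) -> str:
    """Hash the draft content so an approval can be tied to one version."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


def may_send(draft_text: str, approval: Approval) -> bool:
    """Allow a send only if the current draft matches the approved version."""
    return fingerprint(draft_text) == approval.content_hash


# If the draft changes after approval, the send is blocked until re-approval.
approved = Approval(draft_id="d-1", content_hash=fingerprint("Hi Sam, ..."))
assert may_send("Hi Sam, ...", approved) is True
assert may_send("Hi Sam, ... (edited)", approved) is False
```

The design point is that the approval record stores a fingerprint rather than a boolean flag, so "approved" can never silently drift apart from the content that was actually reviewed.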
Section 4. What AgentAlly Does Not Do
AgentAlly does not guarantee accuracy, completeness, timeliness, originality, compliance, or suitability for a specific client, transaction, or jurisdiction. It does not become a broker of record, attorney, lender, title company, escrow provider, or other licensed professional just because it generated a draft or recommendation.
AgentAlly does not silently become a default autonomous sender because an integration exists or a workflow has been configured. It may prepare or stage outbound actions, but current launch behavior should not be read as authority to act without review.
AgentAlly also does not make final consequential decisions on the user's behalf. The product can help frame choices, but it should not be the final deciding authority for regulated communications, housing decisions, contract positions, disclosures, or other sensitive judgment calls.
Section 5. Output Limitations and Reliance
AI Output may be inaccurate, incomplete, outdated, inconsistent, biased, generic, or inappropriate for the task at hand. The same prompt may produce different results, and outputs may reflect missing context, ambiguous instructions, flawed source data, or stale information.
Users must not rely on AgentAlly as the sole or primary basis for consequential decisions in sensitive domains. That includes decisions affecting legal obligations, housing access, contract positions, regulated communications, disclosures, or other matters where qualified human review and lawful authority are required.
When AgentAlly is wrong, the intended workflow is to slow down, not push through. Edit the output, reject it, replace it, or escalate it to the right human reviewer. Do not treat the existence of an AI draft as evidence that the draft is safe to use.
Section 6. Not Professional Advice
AgentAlly is not legal advice, not definitive contract or disclosure interpretation, and not a substitute for required attorney, broker, lender, title, escrow, tax, insurance, appraisal, or other licensed-professional review.
The product may help prepare materials that later receive professional review, but it should not be treated as the final authority on what a contract means, whether a disclosure is sufficient, whether a communication satisfies law, or whether a transaction decision is appropriate.
Users remain responsible for deciding when human supervision, broker sign-off, or professional review is required. This includes situations involving fair housing, state licensing rules, legal rights, trust money, disclosures, contract execution, or other materially sensitive matters.
Section 7. Prohibited Uses
The following uses are prohibited because they conflict with AgentAlly's launch posture, legal boundaries, and real-estate-sensitive risk surface. These examples are intentionally specific to the product and are not the only prohibited uses.
Section 8. Fair Housing and Anti-Discrimination
AgentAlly must not be used for discriminatory housing advertising, steering, targeting, segmentation, recommendation, or communication practices. Protected-class discrimination and proxy discrimination are out of bounds even if they are framed as marketing optimization or workflow efficiency.
Users must review listing copy, neighborhood summaries, recommendation queues, follow-up suggestions, and staged communications for language or logic that could exclude, channel, discourage, or preference people based on protected characteristics or close proxies.
Examples of risk areas include neighborhood descriptions, school commentary, "safe" or "family-friendly" positioning, lead prioritization, audience segmentation, or any attempt to infer who belongs in or should be kept away from a particular property, area, or outreach stream.
Section 9. Communications Compliance and Disclosure
AgentAlly is an approval-gated drafting and workflow product, not a universal compliance engine. Users remain responsible for sender identification, consent, unsubscribe or revocation handling, quiet-hours restrictions, recording rules, platform policies, and any other obligations that apply to their outreach.
The Service must not be used for fake reviews, spam, unlawful telemarketing, consent-bypassing outreach, impersonation, or misleading claims about who wrote, approved, or sent a communication.
If a law, brokerage rule, MLS rule, platform rule, or transaction context requires AI involvement to be disclosed, users remain responsible for making that disclosure. AgentAlly's own AI disclosure labels or document footers can support transparency, but they do not automatically resolve every downstream disclosure obligation.
Section 10. How Core AI Functions Should Be Interpreted
The table below shows how AgentAlly's core AI functions should be interpreted in current Assistive Mode. The pattern is consistent: the AI can help prepare and organize work, but the human keeps responsibility for approval, final judgment, and sensitive decisions.
Section 11. Correction, Override, and Reporting
Correction and override are intended parts of the workflow; the fact that a human changed a draft does not mean the product failed. If AgentAlly produces content that is wrong, incomplete, stale, biased, unsafe, or simply not a fit for the situation, users should edit it, reject it, replace it, or decline to send it.
Users should report harmful, discriminatory, misleading, or otherwise problematic outputs through the available support path so the issue can be reviewed. Until a dedicated trust or policy inbox is published, launch-review questions or reports can be sent to the contact listed on this page.
If an issue may have legal, compliance, or client-impact consequences, the user should also follow their own correction, disclosure, escalation, and recordkeeping obligations. AgentAlly does not take over those obligations just because the first draft involved AI.
Section 12. Records and Audit
AgentAlly may keep records of generations, prepared actions, edits, approvals, rejections, send attempts, send results, audit events, and related metadata for support, security, compliance, dispute resolution, abuse prevention, and product integrity purposes.
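For illustration, records like those listed above could be modeled as an append-only event log, one entry per generation, edit, approval, or send result. This is a hedged sketch of the general pattern only; the field names, event kinds, and log shape here are hypothetical and say nothing about AgentAlly's actual systems.

```python
import json
import time
from typing import Any


def audit_event(kind: str, actor: str, **details: Any) -> str:
    """Serialize one audit event as a JSON line for an append-only log.
    All field names here are illustrative, not a real schema."""
    record = {"ts": time.time(), "kind": kind, "actor": actor, **details}
    return json.dumps(record, sort_keys=True)


# A minimal trail around one approval-gated send (hypothetical identifiers).
log: list[str] = []
log.append(audit_event("generation", actor="ai", draft_id="d-1"))
log.append(audit_event("edit", actor="user:42", draft_id="d-1"))
log.append(audit_event("approval", actor="user:42", draft_id="d-1"))
log.append(audit_event("send_result", actor="system", draft_id="d-1", ok=True))

# Reading the log back reconstructs what happened, in order.
assert [json.loads(e)["kind"] for e in log] == [
    "generation", "edit", "approval", "send_result",
]
```

An append-only structure fits the accountability goal described above: events are added as they happen and later explain the sequence, rather than being edited after the fact.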
These records are meant to preserve accountability and help explain what happened, especially around approval-gated workflows. They do not mean that AgentAlly staff manually reviews every output before it is used. The primary control remains the user's own review and approval.
Some product surfaces also include AI disclosure labeling or audit fields tied to generated content. Those controls help make AI involvement and approval history visible, but users still need to decide whether the final output is appropriate to use.
Section 13. Future Autonomous Modes
This policy covers current Assistive Mode only. It should not be stretched to silently authorize future direct-recipient, auto-reply, auto-send, or other autonomous behavior.
If AgentAlly ever introduces Direct-Recipient Mode or a materially more autonomous workflow, that mode must be separately documented, separately enabled, and accompanied by clearer controls, disclosures, boundaries, and human handoff options appropriate to the risk.
Until then, the existence of AI drafting, recommendations, staged actions, or current approval-gated sends should not be interpreted as permission for silent autonomous operation.
This section should be re-reviewed after Lane C approval-boundary evals and security review so the public language continues to match the enforcement layer.
Section 14. Policy Changes and Contact
AgentAlly may update this policy as the product, provider stack, trust controls, or legal posture changes. Material changes should be reflected through the website, product, email, or another reasonable notice method before the updated policy becomes operative, unless earlier changes are required for security, abuse prevention, or legal reasons.
Until the final launch version is published, questions, launch-review feedback, or issue reports about this AI policy foundation draft can be sent to ben@getagentally.com. A more specific trust, compliance, or policy contact path may be added later without changing the core behavioral boundaries described here.