State bar guidance on AI in 2026 is clearer than most attorneys assume. The American Bar Association issued formal ethics guidance in 2024 establishing that lawyers must understand AI's capabilities and limitations and verify all AI-generated output. By March 2026, California, New York, Florida, Illinois, and New Jersey have all issued formal ethics opinions on AI use — and the operating principles have converged enough that "the rules are unclear" is no longer an accurate description of where things stand.
This article walks through what state bar guidance actually says about AI case intake, what the ABA's 2024 formal opinion requires, the seven-point compliance checklist that maps to the live guidance, and a state-by-state look at where the rules are currently most developed.
Three sources now anchor most of the ethics conversation around AI intake in US law firms:
- The American Bar Association's 2024 Formal Opinion on AI (Formal Opinion 512) and supporting practice guidance, which set the federation-level baseline.
- State bar formal opinions and ethics committee guidance — currently most developed in California (State Bar of California Practical Guidance for AI), Florida (Florida Bar Ethics Opinion 24-1), New York (NYSBA Task Force on AI), and supporting guidance from Illinois and New Jersey.
- ABA Model Rules 1.1 (competence), 1.6 (confidentiality), 5.1 / 5.3 (supervision), 7.1 (advertising), and 8.4 (misconduct) — applied to AI as a "tool" rather than as a separate category of regulated activity.
The federation-level message across all of these sources is consistent: AI is a tool. The lawyer is responsible. The client must know they are interacting with AI when they are. Everything else is implementation detail.
The most common misunderstanding in 2026 is that AI in client intake is broadly prohibited. It is not. The ABA Standing Committee on Ethics and Professional Responsibility's guidance is clearer than most state-bar synopses suggest: a lawyer using a generative AI chatbot for advertising and intake must inform prospective clients that they are communicating with an AI program. Beyond the disclosure requirement, AI may handle:
- Initial intake — capturing names, contact info, jurisdiction, basic facts about the matter.
- Practice-area screening — confirming whether the matter falls within the firm's accepted practice areas.
- Conflict-of-interest screening — running the prospective client and adverse parties through the firm's database.
- Scheduling — booking a consultation with a licensed attorney.
- Confirmation messaging — sending automated text or email reminders.
What AI may not do — anywhere, in any state — is render legal advice, quote case outcomes, set fees in a way that constitutes a definitive engagement, or claim that the AI's analysis is superior to other AI tools without objectively verifiable evidence. The boundary is between intake and counsel: intake is fine with disclosure, counsel is not.
The summary below reflects convergent guidance across the ABA and five state bars (California, New York, Florida, Illinois, and New Jersey) as of early 2026.
Always verify the rules in your specific jurisdiction. State bar guidance evolves — this summary represents convergent guidance as of early 2026 and is not legal advice.
Some state bar guidance has labeled client intake as a heightened-risk area. This is sometimes summarized as "intake is a Red Light." The framing is misleading without the full context — what bar associations have actually flagged is unsupervised, ungated AI intake where the AI has authority to give legal opinions or commit the firm to a representation. That configuration is the prohibited one.
The compliant configuration is structurally different: AI handles the intake conversation, captures the data, screens for conflicts, and books the consultation; a licensed attorney remains the only party giving legal advice and the only party able to accept a representation. The lawyer reviews the intake brief before the consultation, conducts the consultation themselves, and decides whether to engage. Under that configuration, every state bar that has issued 2026 guidance permits AI intake.
For solo and small firms deploying AI case intake in 2026, the practical compliance checklist generally looks like this:
- Disclosure on first contact. The AI identifies itself as an AI assistant within the first turn of the conversation, in plain language. ("Hi, I'm an AI assistant for [Firm]. I'll take some initial information so an attorney can follow up with you. Anything you share is treated as confidential.")
- No legal advice from the AI. The intake script is hard-gated against opinions on the merits, recoverability, settlement value, or jurisdiction-specific legal questions. Those queries trigger an immediate offer to schedule with the attorney.
- Conflict-screening before intake completes. Names, opposing parties, and matter type are checked against the firm's conflict database. If a conflict surfaces, the conversation ends and the prospective client is notified.
- Confidentiality and data-handling disclosure. The prospective client is told what data is being captured and how it is stored. Any AI vendor used for intake should sign a business associate agreement (or equivalent confidentiality contract) and the data should not be used to train external models.
- Verification of AI output before any external use. No AI-generated text — citations, summaries, draft letters — leaves the firm without a licensed attorney verifying it. This applies to anything customer-facing, not just court filings.
- Supervision under Model Rules 5.1 and 5.3. The attorney supervising the AI tool is responsible for its outputs as if it were a non-lawyer assistant. Documenting that supervision (regular review, sample audits) is the standard expected by state bars issuing 2026 guidance.
- Advertising compliance under Model Rule 7.1. Marketing copy referencing AI cannot make superiority claims that are not objectively verifiable. "Our AI is the best" is out. "Our AI handles after-hours intake 24/7" is fine because it is verifiable.
If your firm operates in multiple jurisdictions, the rules are not yet uniform across all 50 states. Here is the snapshot for the five states with the most developed guidance, and the practical implication for AI intake:
California
The State Bar of California issued comprehensive Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law. The guidance closely tracks the ABA framework, with particular emphasis on confidentiality (Rule 1.6) and supervision (Rule 5.3). California is also notable for its consumer-protection-flavored disclosure expectations — when a client interacts with a chatbot, the AI's identity must be clear and the client must know how their data is handled. AI intake configured per the 7-point checklist above is permitted.
New York
The NYSBA Task Force on Artificial Intelligence published its 2024 report and supporting guidance. New York's approach emphasizes confidentiality and the duty of supervision, with specific attention to the "non-lawyer assistant" framing under Rule 5.3 — the AI is treated effectively as a non-lawyer staff member whose work the supervising attorney must oversee. AI intake is permitted with disclosure and the rest of the checklist.
Florida
Florida Bar Ethics Opinion 24-1 explicitly addresses generative AI in legal practice. Florida has been particularly clear that lawyers must "obtain affected client's informed consent" before using GenAI in matters where confidential information is involved, and lawyers must verify the accuracy of AI-generated output. For intake specifically, the disclosure-and-supervision framework applies.
Illinois
Illinois has supporting guidance through ISBA committee opinions emphasizing the same convergent principles: competence, confidentiality, supervision, and disclosure. Illinois firms in 2026 should follow the same 7-point checklist with attention to the state's specific advertising rules under Illinois Rule 7.1.
New Jersey
The New Jersey Supreme Court issued guidance affirming that lawyers may use AI tools, with the standard caveats: maintain confidentiality, verify accuracy, and supervise the work product. New Jersey's framework is fully compatible with the AI intake configuration described in this article.
For states without specific 2026 guidance, the ABA framework plus existing Model Rules apply. As guidance is issued in additional states throughout 2026, expect convergence with the framework outlined in California, Florida, and New York.
Most US firms deploying AI intake in 2026 are not building it themselves. The tools that map cleanly onto the compliance checklist above tend to share a few features:
- The disclosure prompt is built into the AI's first turn — not an optional configuration that can be skipped.
- Hard-coded escalation triggers move legal-advice questions to a human queue automatically.
- Conflict-screening hooks are in the intake form itself, with field mapping to the firm's matter database.
- All conversation transcripts are logged for the lawyer's review and for state bar audit if requested.
- The AI vendor signs a vendor confidentiality / BAA equivalent and confirms in writing that intake data is not used to train external models.
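Two of the features above — conflict-screening hooks and full transcript logging — reduce to small, auditable data structures. The sketch below is illustrative only: `ConflictIndex` and `Transcript` are hypothetical names, and a real deployment would check against the firm's matter-management system rather than an in-memory set.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConflictIndex:
    """Hypothetical stand-in for the firm's conflict database.

    The normalization step (casefold + strip) is the part that matters:
    "ACME Corp" and "acme corp " must hit the same record.
    """
    known_parties: set[str] = field(default_factory=set)

    def add(self, name: str) -> None:
        self.known_parties.add(name.casefold().strip())

    def conflicted(self, *names: str) -> bool:
        """True if any prospective-client or adverse-party name matches."""
        return any(n.casefold().strip() in self.known_parties for n in names)

@dataclass
class Transcript:
    """Full conversation log kept for attorney review and bar audit."""
    turns: list[dict] = field(default_factory=list)

    def log(self, role: str, text: str) -> None:
        self.turns.append({
            "role": role,
            "text": text,
            "at": datetime.now(timezone.utc).isoformat(),
        })
```

Logging every turn with a timestamp is what makes the supervision record producible on request; the conflict check runs before intake completes, per checklist item 3.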
TheBigBot's legal CRM ships with these defaults configured at delivery: disclosure on first turn, hard escalation on legal-advice queries, conflict-screening prompts, full transcript logging, and a vendor confidentiality posture reviewable by the firm's compliance partner. The AI receptionist, intake CRM, scheduler, and review-harvesting workflows share one login and are typically live within three days.
The compliance question is settled enough that the practical question — "where does this actually move the needle?" — is now the more useful one. Across US solo and small firms in 2026, the highest-impact AI intake deployments share three properties:
- Practice areas with high after-hours call volume. Personal injury (post-accident calls), family law (private/evening calls), immigration (cross-time-zone calls). These are the areas where the 24/7 capture is decisive — and where the existing answering-service or voicemail option leaks the most leads.
- Firms with high marketing spend and uneven intake capacity. A firm running $10K+/month in Google Ads and routing calls through a single paralegal cannot mathematically capture the leads it is paying for. AI intake reduces the staffing bottleneck without adding headcount.
- Multi-language client populations. Spanish-speaking, Mandarin-speaking, Arabic-speaking, and other non-English-primary client populations are systematically underserved by English-only answering services. AI intake handles the language switch natively.
Three common implementation errors that turn an otherwise-compliant AI intake into a state bar risk:
- Skipping the disclosure prompt because "it slows down the conversation." Disclosure is non-negotiable across every state bar that has issued guidance. Configure it to land in the AI's opening line and do not let it be optional.
- Letting the AI quote fees for specific matters. Generic "our consultations are $X" or "our typical retainer range is $Y to $Z for this type of matter" is fine. "Your case will cost $15,000" is the line that crosses into engagement territory and should always be a licensed attorney's call.
- Not logging conversations. If a state bar audit ever asks "what did the AI say to this prospective client" and you cannot produce a transcript, the supervision-record question becomes harder to answer. Full transcript logging is a baseline requirement, not a nice-to-have.
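The fee-quote line described above can also be enforced mechanically, as an outbound guard on the AI's draft replies. The following is a sketch under the assumption of a simple pattern filter (`outbound_ok` is a hypothetical name); a real system would pair this with model-side instructions and human review rather than rely on a regex.

```python
import re

# Hypothetical outbound-message guard: generic fee information passes,
# matter-specific quotes ("your case will cost $X") are blocked and
# routed to the attorney instead.
SPECIFIC_QUOTE = re.compile(
    r"\byour (case|matter|retainer)\b.*\$\d", re.IGNORECASE
)

def outbound_ok(message: str) -> bool:
    """Return False if the draft reply quotes a fee for this specific matter."""
    return SPECIFIC_QUOTE.search(message) is None
```

Note the asymmetry this encodes: "our consultations are $200" names the firm's standard pricing and passes, while "your case will cost $15,000" attaches a number to this matter and is held for the attorney.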
Does my state bar require attorneys to take CLE on AI ethics?
Several states — including Florida, California, and New York — have moved toward AI-specific competence expectations under Model Rule 1.1 (technological competence). As of early 2026, CLE programs covering AI ethics are widely available and increasingly recommended. Always check your state bar's most current guidance and CLE requirements before assuming any specific obligation.
Can an AI handle a fee quote during intake?
Generally no. Quoting a specific fee for a specific matter often crosses into the "definitive engagement terms" zone where state bar rules expect a licensed attorney to be the speaker. Most compliant AI intake systems handle the practice-area-fit conversation, but defer the specific fee discussion to the attorney consultation. Always verify the rules in your jurisdiction.
What if my state hasn't issued AI ethics guidance yet?
The federation-level baseline is the ABA's 2024 Formal Opinion plus the existing Model Rules. Most state bars treating AI as a "tool" — rather than a separate category — apply existing rules on competence, confidentiality, supervision, and advertising. When state-specific guidance does come out, it generally aligns with the ABA framework. Following the ABA framework is a reasonable default until your state issues specific guidance, and reviewing your bar's website periodically is the standard practice.
Do I have to disclose AI use to existing clients, or just prospective ones?
Disclosure obligations apply to both, but the form differs. Prospective clients in an intake conversation should be told at the start that they are talking with an AI. Existing clients should generally be informed if AI is being used to handle aspects of their matter (drafting, research, scheduling) per Model Rule 1.4 (communication). Specific disclosure expectations vary by jurisdiction and by the type of AI use.
Where is the line between intake and legal advice?
"Tell me what kind of case I have" is the line. AI may confirm a matter falls within a practice area the firm accepts (intake). AI may not characterize the merits, jurisdiction-specific legal classification, or potential outcomes (legal advice). When the conversation drifts toward "what should I do" or "how strong is my case," compliant AI intake systems escalate to a human attorney rather than respond.
Does my malpractice carrier care about AI intake?
Increasingly, yes. Several major US legal malpractice carriers (LawyersFirst, ALAS, CNA, Travelers) updated their 2025 and 2026 underwriting questionnaires to ask about AI use. The expected answer is rarely "we do not use AI" — it is "we use AI responsibly with documented supervision." Having the 7-point checklist documented in your firm's AI policy is the answer most carriers want to see. Confirm with your specific carrier.
Can the AI receive privileged communications?
Yes, if structured as a confidential agent under appropriate vendor agreements. The AI vendor functions analogously to a non-lawyer assistant under Model Rule 5.3 — bound by confidentiality, supervised by an attorney, and contractually prohibited from disclosing or training on the privileged content. The BAA / vendor confidentiality agreement is the contractual mechanism that makes this work.
Does using AI intake affect attorney-client privilege?
Properly configured, no. The standard analysis treats the AI as a confidential agent of the firm, similar to a paralegal or receptionist. Privilege attaches to communications made for the purpose of obtaining legal services, regardless of whether the initial intake is taken by a human or an AI assistant — provided the standard confidentiality protections (vendor BAA, no external training, secure storage) are in place. Always confirm the analysis with your jurisdiction's specific privilege rules.
The state bar rules on AI in client intake have stabilized enough in 2026 that compliance is now a configuration question. The firms still treating "AI ethics" as a reason to keep using voicemail are not protecting themselves from a state bar action — they are protecting their competition's market share. The compliant pattern is well-defined: disclose, escalate legal advice to humans, screen for conflicts, supervise the tool, and verify the output. Inside that pattern, AI case intake is the largest practical lift in solo and small-firm operations available in 2026.
If you'd like to see what an ethics-compliant AI case intake configuration — with disclosure prompts, conflict screening, attorney-only legal advice, full transcript logging, and review-harvesting wired into one login — looks like running on your firm's lead flow, book a 20-minute demo. We will walk through the configuration for your jurisdiction and your practice areas.
This article is for general informational purposes only and does not constitute legal advice. Reading this article does not create an attorney-client relationship. Ethics rules vary by jurisdiction and change frequently — consult your state bar's most current guidance and a licensed attorney in your jurisdiction before deploying AI tools in your practice.
References & sources
- American Bar Association, Formal Opinion 512: Generative Artificial Intelligence Tools (2024) — americanbar.org
