Ask Your Agent Security Guidance

How to use AYA prompts without outsourcing judgment, trust, or control.

Back to Reference
AYA

What the AYA prompts are for

AYA prompts are intended to give you a visible, inspectable starting point for a focused conversation with your own AI assistant or agents. They are not instructions to trust the model, not evidence that the model understands your environment, and not a substitute for reading the underlying SyzygySys material directly.

ACE by SyzygySys has developed a safe way to extract more information, and hence gain more knowledge and better understanding by providing "Ask Your Agent" prompts indicated throughout reference material with the AYA logo. These provide a starting point for a focused chat with your AI assistant or agents. Using copy-paste to seed a prompt in your chat dialog means you can see exactly what you are submitting. Read both your input and your assistant's output carefully, verify claims against the source material, and treat generated responses as untrusted until reviewed. For security concerns, including prompt injection, review the linked guidance before acting on any output.
Core Rule

Read input and output carefully

Do not treat copied prompts as automatically safe simply because they are short or appear on a trusted page. Inspect what you are about to submit, and inspect the answer you get back before you rely on it, share it, or act on it.

Trust Boundary

Your model is still an untrusted interpreter

The model can misunderstand context, invent facts, overstate certainty, or fuse trusted and untrusted inputs into a plausible but unsafe answer. AYA narrows the question. It does not guarantee correctness.

Prompt Injection

What prompt injection looks like

Prompt injection happens when content inside a document, web page, transcript, or attached file tries to influence the model in ways you did not intend. This can include hidden instructions, adversarial phrasing, fake system messages, or content that tells the model to ignore prior rules and reveal secrets, change behavior, or recommend unsafe actions.

Practical Use

Safe operating checklist

  1. Read the copied prompt before you submit it.
  2. Limit the model's source material to what you intend it to read.
  3. Ask the model to cite or quote the exact source passage for important claims.
  4. Verify important outputs against the original SyzygySys page or primary source.
  5. Do not enter passwords, tokens, confidential contracts, or regulated data into consumer chat tools unless explicitly approved for that environment.
  6. Do not let an agent take external actions from an AYA-seeded conversation without separate human review.
Do Not

Do not rely on a model summary instead of reading a policy, contract, design note, or control statement that matters for legal, financial, security, or operational decisions.

Good Pattern

Use AYA to orient the conversation, then force the model back to specific evidence: ask what source supports the answer, what is uncertain, and what should be checked by a human.

Escalation

When to stop and review manually

Stop and review the underlying material yourself when the model output affects security controls, access rights, compliance posture, investor statements, financial decisions, vendor commitments, or system changes. Those are the cases where a polished but wrong answer causes real damage.

AYA is designed to improve the quality of the question. It does not move accountability away from the human operator.