Most prompting advice is written for general audiences. “Be specific.” “Give examples.” Useful, but shallow. As software developers, we have a different relationship with LLMs than someone drafting marketing copy. We think in types, interfaces, and constraints. We can leverage that.

Why Developer Prompting Is Different

When a marketer prompts an LLM, they want creative output. When a developer prompts one, they usually want correct output—code that compiles, queries that return the right rows, architectures that actually scale.

This changes the game. Creativity tolerates ambiguity. Correctness demands precision. The prompting techniques that matter most for developers are the ones that reduce ambiguity and constrain the output space.

Technique 1: Specify the Contract, Not Just the Goal

Bad prompts describe what you want. Good prompts describe the contract—inputs, outputs, constraints, and edge cases.

Weak prompt:

Write a function that validates passwords.

Strong prompt:

Write a Python function `validate_password(password: str) -> tuple[bool, list[str]]`
that checks a password against these rules:
- Minimum 12 characters
- At least one uppercase, one lowercase, one digit, one special character (!@#$%^&*)
- No more than 3 consecutive identical characters
- Not in the provided blocklist

Return (True, []) if valid, or (False, [list of violated rules as strings]).
Raise TypeError if password is not a string.

The second prompt reads like a docstring or type signature—because it is. You’re defining an interface. The LLM fills in the implementation.

This works because LLMs are pattern completers. When you give them a well-defined contract, they have far less room to hallucinate creative interpretations of what you meant.
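Given that contract, a plausible implementation looks like the sketch below. Note one assumption: the prompt says "the provided blocklist" without pinning down how it's provided, so here it becomes an extra parameter.

```python
import re

SPECIALS = set("!@#$%^&*")

def validate_password(
    password: str, blocklist: frozenset[str] = frozenset()
) -> tuple[bool, list[str]]:
    # Contract from the prompt: return (True, []) when valid,
    # otherwise (False, [violated rules]); TypeError for non-strings.
    if not isinstance(password, str):
        raise TypeError("password must be a string")

    violations: list[str] = []
    if len(password) < 12:
        violations.append("minimum 12 characters")
    if not any(c.isupper() for c in password):
        violations.append("at least one uppercase letter")
    if not any(c.islower() for c in password):
        violations.append("at least one lowercase letter")
    if not any(c.isdigit() for c in password):
        violations.append("at least one digit")
    if not any(c in SPECIALS for c in password):
        violations.append("at least one special character (!@#$%^&*)")
    if re.search(r"(.)\1{3}", password):  # 4+ identical characters in a row
        violations.append("no more than 3 consecutive identical characters")
    if password in blocklist:
        violations.append("password is blocklisted")
    return (not violations, violations)
```

Because the contract fixed the return shape up front, you can write tests against it before the implementation exists.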

Technique 2: Provide the Surrounding Context

LLMs don’t know your codebase. Unless your tooling injects it for you, every prompt starts from zero context. The more relevant context you provide, the better the output fits your system.

Without context:

Write a function to fetch user data from the database.

With context:

I'm using Python 3.12 with SQLAlchemy 2.0 and PostgreSQL.
Here's my existing User model:

class User(Base):
    __tablename__ = "users"
    id: Mapped[int] = mapped_column(primary_key=True)
    email: Mapped[str] = mapped_column(String(255), unique=True)
    display_name: Mapped[str | None] = mapped_column(String(100))
    created_at: Mapped[datetime] = mapped_column(server_default=func.now())

Write an async function `get_user_by_email(session: AsyncSession, email: str) -> User | None`
that fetches a user by email. Use select() style, not the legacy query API.

The context-rich prompt will produce code that slots directly into your project. The vague prompt will produce something generic that needs rewriting.

What context to include:

  • Language version and framework
  • Relevant type definitions, models, or interfaces
  • The API style your codebase uses (REST, GraphQL, RPC)
  • Error handling conventions (exceptions, Result types, error codes)
  • Existing patterns you want followed

You don’t need to paste your entire codebase. Include what’s adjacent—the types, functions, and conventions the generated code needs to interact with.

Technique 3: Constrain the Solution Space

Unconstrained prompts produce generic code. Constraints push the LLM toward the specific solution you need.

Unconstrained:

How do I handle errors in this API endpoint?

Constrained:

I have a FastAPI endpoint that calls three external services sequentially.
I want to:
- Use structured error responses with a consistent schema: {"error": str, "code": str, "details": dict}
- Distinguish between retryable errors (503, 429, timeouts) and permanent errors (400, 404)
- Log failed requests with correlation IDs
- Not use try/except at the handler level—use middleware or exception handlers instead

Show me how to implement this.

Each constraint eliminates a category of generic answers. By the time you’ve listed four or five constraints, the LLM is working within a narrow corridor that matches your actual architecture.
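The core of what those constraints ask for can be sketched without FastAPI. Here the response schema and the retryable/permanent split become a small exception type; the class name and status set are illustrative, not from the source (a fuller version would also mark timeouts retryable):

```python
from dataclasses import dataclass, field

RETRYABLE_STATUSES = {429, 503}  # per the prompt; timeouts would also qualify

@dataclass
class ApiError(Exception):
    # Matches the schema from the prompt: {"error": str, "code": str, "details": dict}
    error: str
    code: str
    details: dict = field(default_factory=dict)
    status: int = 500

    @property
    def retryable(self) -> bool:
        return self.status in RETRYABLE_STATUSES

    def to_response(self) -> dict:
        return {"error": self.error, "code": self.code, "details": self.details}
```

In the actual app, a single exception handler registered for this type would turn `to_response()` into the JSON body, keeping try/except out of the handlers as the last constraint requires.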

Technique 4: Show, Don’t Tell

Examples are the most powerful tool in your prompting toolkit. They communicate formatting, style, edge cases, and expectations more precisely than prose.

Without examples:

Write a migration script for adding a status column to the orders table.

With an example of your migration style:

Write an Alembic migration to add a `status` column to the `orders` table.

Here's how our existing migrations look:

def upgrade() -> None:
    op.add_column(
        "orders",
        sa.Column("priority", sa.SmallInteger(), nullable=False, server_default="0"),
    )
    op.create_index("ix_orders_priority", "orders", ["priority"])

def downgrade() -> None:
    op.drop_index("ix_orders_priority", table_name="orders")
    op.drop_column("orders", "priority")

The new status column should be a VARCHAR(20) with a default of 'pending'.
Add an index on it. Follow the same pattern as the example.

By showing an existing migration, you communicate your naming convention, your style preferences, and the level of detail you expect—all without explicitly stating those rules.

This technique scales to any kind of code generation: test patterns, API handlers, data transformations, CI configs. Find a good example in your codebase and include it in the prompt.

Technique 5: Think in Iterations, Not One-Shots

Developers instinctively write code iteratively—write, test, refine. Apply the same approach to prompting.

Round 1: Get the structure right

Design the class hierarchy for a notification system that supports
email, SMS, and push notifications. Just show me the abstract base class
and method signatures—no implementations yet.

Round 2: Fill in specifics

Good. Now implement the EmailNotification class. Use our SMTP config:
- Host: pulled from env var SMTP_HOST
- Use aiosmtplib for async sending
- HTML templates are in templates/email/ using Jinja2

Round 3: Handle edge cases

Add retry logic to the send method. Use tenacity with exponential backoff,
max 3 retries, only retry on connection errors and timeouts.

Each round builds on the last. You can course-correct between rounds instead of discovering problems after generating 200 lines in one shot.

One-shot prompts work for trivial tasks. For anything complex, iterate.
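For round 1, the structure-only answer might look like this sketch (class and method names are illustrative):

```python
from abc import ABC, abstractmethod

class Notification(ABC):
    """Abstract base for all notification channels (round 1: signatures only)."""

    def __init__(self, recipient: str) -> None:
        self.recipient = recipient

    @abstractmethod
    async def send(self, subject: str, body: str) -> bool:
        """Deliver the notification; return True on success."""

    @abstractmethod
    def validate_recipient(self) -> bool:
        """Check that the recipient address, number, or token is well-formed."""

class EmailNotification(Notification): ...  # rounds 2-3 fill these in
class SMSNotification(Notification): ...
class PushNotification(Notification): ...
```

Agreeing on these signatures first means rounds 2 and 3 can't drift into a different shape than the one you approved.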

Technique 6: Ask for Reasoning Before Code

When you’re exploring a design space, ask the LLM to reason before it writes code. This surfaces assumptions and trade-offs you might miss.

I need to implement rate limiting for our API. Before writing code, walk me
through the trade-offs between:
1. Token bucket vs. sliding window algorithms
2. Redis-backed vs. in-memory rate limiting
3. Per-user vs. per-endpoint vs. combined limits

Consider that we run 4 API server instances behind a load balancer,
handle ~500 requests/second, and use Redis for session storage already.

After the analysis, recommend an approach and implement it.

This is the prompting equivalent of “think before you code.” The reasoning step often reveals constraints or failure modes that the LLM would otherwise silently handle incorrectly.
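To ground option 1 of that comparison: an in-memory token bucket is only a few lines (single-process only; a Redis-backed variant would keep the same two numbers, token count and last-refill time, in Redis and update them atomically):

```python
import time

class TokenBucket:
    """Token bucket: at most `capacity` tokens, refilled at `rate` tokens/second."""

    def __init__(self, capacity: float, rate: float) -> None:
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

Seeing how little state is involved makes the Redis-vs-in-memory question concrete: with 4 server instances, in-memory buckets mean each instance enforces its own limit, so a shared store is what makes the limit global.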

Technique 7: Use the LLM as a Reviewer, Not Just a Writer

Some of the highest-value prompts aren’t about generating code—they’re about reviewing it.

Review this function for correctness, performance, and edge cases:

async def transfer_funds(from_id: int, to_id: int, amount: Decimal) -> bool:
    from_account = await get_account(from_id)
    to_account = await get_account(to_id)

    if from_account.balance < amount:
        return False

    from_account.balance -= amount
    to_account.balance += amount

    await save_account(from_account)
    await save_account(to_account)
    return True

Specifically look for: race conditions, missing error handling,
and whether this would work correctly under concurrent requests.

This prompt leverages the LLM’s pattern matching against thousands of similar code review discussions. It’s particularly good at spotting the kind of bugs you become blind to when you’ve been staring at the same code all day.
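For reference, the central bug a good review should flag is the unsynchronized read-modify-write: two concurrent transfers can both pass the balance check before either save lands. A minimal single-process fix is a lock around the critical section; in production this would be a database transaction with row locks instead, and the in-memory accounts below are stand-ins for the source's `get_account`/`save_account`:

```python
import asyncio
from decimal import Decimal

# In-memory stand-ins for the original's get_account/save_account.
accounts: dict[int, Decimal] = {1: Decimal("100"), 2: Decimal("0")}
transfer_lock = asyncio.Lock()  # serializes transfers within this process only

async def transfer_funds(from_id: int, to_id: int, amount: Decimal) -> bool:
    # The lock makes the check-then-update atomic, so two concurrent
    # transfers can no longer both pass the balance check.
    async with transfer_lock:
        if accounts[from_id] < amount:
            return False
        accounts[from_id] -= amount
        accounts[to_id] += amount
        return True
```

Run two concurrent 60-unit transfers from an account holding 100 and exactly one succeeds, which is precisely what the original version cannot guarantee.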

Technique 8: Format Your Output Expectations

When you need structured output—JSON, YAML, specific code formatting—be explicit about it.

Generate TypeScript types for our API response. Output ONLY the type
definitions with no explanation. Use this format:

interface ApiResponse<T> {
  data: T;
  meta: { requestId: string; timestamp: number };
}

Generate response types for these endpoints:
- GET /users/:id -> single user with profile
- GET /users -> paginated list of users
- POST /users -> created user with generated fields

Being explicit about output format saves you from parsing prose to find the code buried inside. It also produces output you can paste directly into your editor.

Anti-Patterns to Avoid

1. The Kitchen Sink Prompt

Write a complete user authentication system with login, registration,
password reset, 2FA, OAuth, session management, rate limiting, audit
logging, and admin controls.

This will produce a mediocre version of everything and a good version of nothing. Break it down.

2. The Vague Refactor

Make this code better.

Better how? Faster? More readable? More testable? More maintainable? Each of these leads to different changes. Be specific about what “better” means.

3. Ignoring the Model’s Limitations

LLMs work from training data. They may not know your company’s internal framework, a library released last month, or the specifics of your production environment. If you’re working with something niche, include documentation snippets or examples rather than assuming the model knows it.

4. Trusting Without Verifying

No prompt, however well-crafted, guarantees correct output. Always:

  • Read the generated code before using it
  • Run the tests
  • Check edge cases the LLM might have missed
  • Review security-sensitive code with extra care

Prompting well reduces the rate of errors. It doesn’t eliminate them.

A Prompt Template for Daily Use

Here’s a template I find myself returning to:

**Context**: [Language, framework, relevant dependencies]
**Existing code**: [Adjacent types, interfaces, or functions]
**Task**: [What I need, specified as a contract]
**Constraints**:
  - [Constraint 1]
  - [Constraint 2]
  - [Constraint 3]
**Example**: [A similar pattern from the codebase, if applicable]
**Output format**: [Code only / explanation first / both]

You don’t need to use this literally. But mentally running through these categories before writing a prompt will reliably improve your results.

The Deeper Lesson

Effective prompting is just effective communication. The same skills that make you good at writing design docs, bug reports, and code reviews make you good at prompting. You’re specifying behavior for a system that needs to understand your intent.

The developers who struggle most with LLMs are the ones who treat them like magic—type a vague wish and hope for the best. The developers who get the most value treat them like a junior engineer: capable but literal-minded, productive when given clear specifications, unreliable when left to guess.

Write your prompts like you’d write a good ticket. Define what “done” looks like. Provide the context someone new to the codebase would need. Specify the constraints. Show examples of good output.

The model does the rest.