In software engineering, your output (the code you write) is only as good as your input: what you have learned, experienced, and understood from your conversations with the client. The same is true when working with Large Language Models (LLMs).

If you use GenAI tools to complement your software engineering expertise, here are some strategies you can use to move beyond basic code generation and into collaborative development: in effect, pair programming with AI.

1. The Q&A Strategy (The “Reverse Prompt”)

Instead of trying to write the "perfect" prompt upfront, let the AI act as the requirements-gathering engineer. Let it ask you the questions, surfacing things you hadn't thought of before.

The Technique

Ask your LLM to propose a solution, but clearly state that it must ask you a series of yes/no or clarifying questions before it provides its final recommendation.

The Value

This forces the model to identify the context missing from your initial prompt (auth needs, API styles, architectural patterns) that you might have overlooked. Without this step, the AI fills the gaps with assumptions you may not agree with.
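A minimal sketch of what such a prompt might look like (the feature and the "done" keyword are illustrative; adapt them to your project):

```
I need a REST endpoint for uploading user avatars.
Before proposing a solution, ask me clarifying questions
(yes/no where possible) about anything you need to know,
such as auth, storage, file-size limits, and framework.
Ask one question at a time, and only give your final
recommendation once I say "done".
```

Asking for one question at a time keeps the exchange focused and stops the model from dumping a checklist you'll skim past.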

2. The Pros and Cons Strategy

In software engineering, there is rarely one “right” way to solve a problem. There are always going to be trade-offs. That’s what keeps it interesting.

The Technique

When asking for a code implementation (like a database connection class), specifically request multiple patterns along with the pros and cons of each.

The Value

This helps you identify potential pitfalls like memory leaks, resource exhaustion, or inflexibility before the code hits your editor. It treats the AI as a senior peer reviewer rather than just a code monkey. This is my favourite strategy when coding with LLMs.
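As an illustration, a prompt in this spirit might read (the patterns named are suggestions; the model may propose others):

```
Write a database connection class for a Python web service.
Show me at least three patterns (for example a singleton,
a connection-per-request approach, and a pooled connection
manager) with the pros and cons of each, especially around
resource exhaustion, thread safety, and testability.
Finish by saying which one you would choose for a
medium-traffic app, and why.
```

Naming the trade-off axes you care about (here: resource exhaustion, thread safety, testability) tends to produce a sharper comparison than asking for generic pros and cons.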

3. Stepwise Chain of Thought

Whether it is a human creating a pull request or your favourite LLM generating code, you really don't want to encourage a world where you spend three hours reviewing a change because it touches 20 different files and some 500 lines of code. Whether it comes from AI or a human, that is a red flag. For AI, small, specific tasks are easier to finish correctly than vague, ambiguous ones. When the AI tries to do too much at once, it is more likely to hallucinate and produce undesirable results.

The Technique

Instruct the LLM to do the work one step at a time and, crucially, to wait for you to type a specific keyword, like "next", before proceeding to the next change.

The Value

This allows you to validate and "apply" each small change incrementally (e.g., converting var to const, then refactoring a specific variable, then extracting a method). It keeps the context window clean and keeps the developer in total control of the refactor. The result: clear changes that you can explain, even though the LLM wrote them.
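A stepwise prompt might look like this (the file name and the specific steps are placeholders for your own refactor plan):

```
Refactor utils.js to modern JavaScript. Work one step at a
time: first convert var to let/const, then extract the
duplicated date-formatting logic into a helper, then add
JSDoc comments. After each step, show only the diff and
wait for me to type "next" before continuing.
```

Asking for "only the diff" at each step keeps every review bite-sized, which is the whole point of the strategy.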

4. The Roleplay

I hope you have learned by now that LLMs perform significantly better when assigned a specific persona. Asking your LLM to be an expert in something will give you much stronger responses when querying it about that particular topic. So use that to your advantage.

The Technique

Assign the AI a role (e.g., “An expert technical teacher who simplifies complex topics”) and combine it with the Stepwise strategy mentioned earlier.

The Value

By telling the AI not to give you the answer but to “nudge” you when you’re wrong, you turn the tool into a personalized pair-programming mentor.
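Combining the persona with the stepwise gate could look like this (the topic, recursion, is just an example):

```
You are an expert technical teacher who simplifies complex
topics. I'm learning recursion. Don't give me answers.
Give me one small exercise at a time, review my attempt,
and nudge me with a hint when I'm wrong. Wait for me to
type "next" before moving on to a new exercise.
```

The "don't give me answers" constraint is what turns the session from code generation into mentoring; without it, most models will happily solve the exercise for you.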

The bottom line: stop working harder to write the perfect prompt. Use these strategies to make the AI work harder for you.

Check out the video by the VS Code team on YouTube.