
My Fight with Figma's AI Agent: A Guide to Meta-Prompting

Alejandro Albarenga

January 22, 2026

"I’m still learning how to integrate these new AI tools into my actual daily workflow." That’s probably one of the phrases every UX designer is repeating right now.

The promise of Figma’s "Prompt to Edit" feature is incredible, but the operational reality can be... different. At first, I struggled to understand how to get real value out of it without feeling like I was wasting time.

This article documents how I transformed a frustrating "trial and error" process into an efficient workflow using a technique called Meta-Prompting.

Why is iterating with Figma's AI agent so difficult?

I started the way we all do: typing exactly what I needed directly into the Figma prompt bar.

  • My Prompt: "Change this section to a three-column grid."
  • The Result: Acceptable. A decent starting point.

The real challenge emerged when I tried to iterate. I quickly realized that Figma’s current agent struggles to interpret chained corrections. If I asked it to adjust a specific section, it would often "hallucinate" and alter elements that were already perfect, or the final design would lose visual coherence.

I felt like I was running in circles, spending more energy explaining the changes to the AI than it would have taken to just design them manually. My direct approach wasn't working.

How can ChatGPT help create better Figma prompts?

I realized the problem wasn't the tool, but the lack of structure in my request. Figma’s AI needs dense, precise context in a single shot to function effectively.

I decided to test Meta-Prompting.

What is Meta-Prompting in Design?

Instead of trying to craft the perfect prompt myself ("User to Tool"), I use a conversational LLM (like Gemini, ChatGPT, or Claude) as an intermediary ("User to AI to Tool").

What does a Meta-Prompting workflow look like in practice?

  1. Context: I explain to ChatGPT exactly what I want to achieve, the visual style, and the requirements.
  2. Generation: I ask ChatGPT to generate a highly detailed, technical, structured prompt specifically for the Figma agent.
  3. Execution: I copy the generated prompt and paste it into Figma's prompt bar (see the condensed example below).
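
Here is a condensed, illustrative version of that exchange. The copy and measurements are placeholders, not the literal prompts from my project:

  • My message to ChatGPT: "I need a prompt for Figma's AI agent. Goal: turn the selected hero section into a three-column feature grid. Style: minimal, generous white space, components from our local wireframe kit only. Write it as one complete, unambiguous instruction that requires no follow-up questions."
  • ChatGPT's output (what I paste into Figma): "Restructure the selected section into a three-column grid with equal column widths and 24px gutters. Each column contains, top to bottom: a 48x48 icon, a short heading, and a two-line description. Reuse existing components from the local library. Do not modify the navigation bar, footer, background color, or page margins."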

It was a learning process, but the results were immediate. It is far more effective to iterate on the text in ChatGPT until the logic is bulletproof, and only then hand it to Figma for clean execution.

Does the quality of your Design System affect AI results?

There is a second variable in this equation: the quality of your base file. We learned that AI cannot perform magic if the underlying structure is chaotic.

  • With unpolished systems: The result is poor and inconsistent.
  • With robust systems: Once we switched to the Kaizen Softworks Wireframe Kit, the quality improved dramatically.

Why? Because the agent finally had solid components (with Auto Layout and properly named variants) to "latch onto." The AI understands logical structures better than loose pixels. 
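
To make "properly named" concrete, here is the kind of structure the agent handles well versus the kind it chokes on (the names are illustrative, not taken from the kit):

  • Good: "Button / Primary / Large", with variant properties like State=Default/Hover and Auto Layout applied to every frame.
  • Bad: "Rectangle 47" and "Group 12 Copy 3", loose layers with no semantics the agent can map to your intent.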

(Technical Note: Even with a good system, we found bugs. For example, the agent still struggles to correctly map FontAwesome icons, often requiring manual adjustment.)

What are the key takeaways for AI-assisted design?

If you are struggling with design agents, consider these points:

  • Context is King: The design tool doesn't need to have the best chat interface. Sometimes the key is managing that context externally (in ChatGPT) and pasting the finished prompt in.
  • One-Shot vs. Iteration: Figma works better with complete, robust instructions from the start (One-Shot) than with a long chain of small corrections.
  • Garbage In, Garbage Out: If your design system lacks clear naming conventions and structure, the AI won't be able to infer the logic.

Q&A: Common Questions on Figma AI

Is "Prompt to Edit" ready for complex projects? For generating initial structures or rapid variations, yes. For final "pixel-perfect" polish, human oversight is still mandatory.

What is a "One-Shot Prompt"? It is a single instruction that contains all necessary information (style, constraints, content) for the AI to complete the task in one attempt, without needing follow-up questions.

Why use ChatGPT to write Figma prompts? Large Language Models (LLMs) are better at structuring logic than most of us are when typing off the cuff. They help ensure your prompt is unambiguous, preventing Figma's agent from misinterpreting your intent.

How do you handle iterations with design agents? Have you found a seamless workflow, or are you still in the trial-and-error phase like me?

Alejandro Albarenga

Interaction & Motion Designer