The use of AI by AI engineers

Bartosz Gonczarek
Vice President, Co-founder

"So, how do you do it?" Our customers often ask us this: being experts in AI, how do you actually support yourselves with AI when coding AI solutions?

To understand how such a question puts us in an uncomfortable spot, try asking a magician about his trick. Wizards, too, rarely speak of their magic. But since the question recurs constantly, we decided to give you a sneak peek behind the curtain where the magic happens. Let's begin by dispelling a myth first.

Is coding no more with AI?

Andrej Karpathy, one of the most influential figures in the AI field, floated the idea that coding might become irrelevant at some point, as we might soon rely fully on LLMs to write it. He called this "vibe coding": you unleash yourself from the burden of analytical thinking, go with the flow, follow the vibe, and embrace the exponentials that code generators offer. The trend gathered momentum with the proliferation of low-code and no-code platforms that made the headlines, offering a chance to build entire SaaS platforms with zero handwritten code, all thanks to AI code generators. Welcome to the future. A future in which users such as Leo (known as @leojr94_ on X) stress-tested this potential by building a fully functional product with AI tools alone. With initial pride, he shared that:

“AI is no longer just an assistant, it’s also the builder.
You can continue to whine about it or start building”

Yet two days later, the system he built with AI was on fire.

His example illustrates the risks of not fully knowing what the generated code does and what vulnerabilities it unwittingly opens. Leo shut down the project a few days later, admitting his mistake, and started to rebuild his platform in a different way. In doing so, he gained an understanding of what it takes to build (or prompt) a complex solution. But Leo wasn't an engineer, so he couldn't see the risks and limitations coming; he had to experience them first. Let's see what actual senior engineers, who understand not only computer code but also the math and data structures behind it, think AI tools are good for in software development.

What are LLMs good for in coding?

One of Vstorm's senior engineers admitted, "Thank God we learned how to code before Cursor."

I asked several AI engineers who use AI while coding AI solutions to converge on one statement before getting into the details of how they use it. The one thing they all agreed on is that prompting for code proved to each of them that their perspective on the expected result was too narrow, so the code generator omitted important aspects that had to be added manually.

As each of them leans on LLMs in a different way, the support they get falls into a few recurring patterns. The list below shows the ways AI professionals leverage LLMs in coding:

  • Brainstorming solutions: LLMs are surprisingly good at this, especially at asking follow-up questions. Before implementing, it's worth taking a task description and talking it through with an LLM. Gemini 2.5 Pro is particularly good at this: it often offers a few different options on its own.
  • Code autocompletion: Cursor has very good autocomplete (much better than Copilot). When using Cursor to accelerate code creation with autocomplete, it's worth adding a system prompt that describes the specific coding style you expect.
  • Debunking genius in coding: LLMs are great at criticizing coding ideas, especially the reasoning models. If we're working on something more complicated, it's worth throwing the code into an LLM and asking for criticism. It helps to set up a system prompt that delivers unobfuscated feedback, for example: "When I request feedback or critique, I prefer it to be direct, unfiltered and harsh" (see the sketch after this list).
  • Use of auxiliary technologies: Cursor copes surprisingly well in situations where a command-line interface needs to be used to interact with auxiliary systems, such as Docker containers. It understands various commands and is able to figure out, step by step, which command will give the expected effect. My favorite example is running a migration in a Docker container: Cursor's agent can perform several operations while analysing the logs to figure out what to do next.
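
To make the critique point concrete, here is a minimal sketch of wiring that exact "harsh feedback" instruction into an API call. It assumes the official openai Python SDK; the model name and the helper function are illustrative assumptions, not the exact setup our engineers use.

```python
# Minimal sketch of the "harsh critique" setup, assuming the official
# openai Python SDK; the model name "o3" is an assumption, not a requirement.
from openai import OpenAI

client = OpenAI()

CRITIC_SYSTEM_PROMPT = (
    "When I request feedback or critique, I prefer it to be "
    "direct, unfiltered and harsh."
)

def critique_code(snippet: str) -> str:
    """Ask a reasoning model to point out flaws instead of praising the code."""
    response = client.chat.completions.create(
        model="o3",  # any reasoning-capable model should work here
        messages=[
            {"role": "system", "content": CRITIC_SYSTEM_PROMPT},
            {"role": "user", "content": f"Criticize this code:\n\n{snippet}"},
        ],
    )
    return response.choices[0].message.content
```

The point of the system prompt is to override the model's default agreeableness, so the output reads like a tough code review rather than encouragement.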

All of these examples share one thing in common: the person writing the code knows what they are doing, but can do it better (and faster) with the insightful helping hand of LLMs in execution. Even before coding starts, the AI tools they use have proven helpful in ideation, not by spotting solutions (a person usually does that) but, surprisingly, by asking the right questions.

Finding the right questions before coding begins

Before coding, at the phase when a problem needs to be well understood so the solution can be mapped out, asking questions and following up on them is something LLMs are very good at.

The way this can be done is by letting an LLM analyze the existing code along with a task description and create a step-by-step plan of what needs to be done. Once the plan is ready, the engineer verifies and corrects it instead of starting from scratch. But that's not all: at the end, when the final solution is in place, you can ask the LLM to verify, based on the code written, whether the entire plan has been completed and to highlight the missing parts.
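
As an illustration, here is a minimal sketch of that plan-first loop, assuming the official openai Python SDK; the model name, prompts, and helper functions are illustrative assumptions rather than the exact workflow our engineers use.

```python
# Minimal sketch of the plan-then-verify workflow, assuming the official
# openai Python SDK; model name and prompt wording are assumptions.
from pathlib import Path
from openai import OpenAI

client = OpenAI()
MODEL = "o3"  # assumption: any reasoning-capable model


def draft_plan(task_description: str, code_paths: list[str]) -> str:
    """Ask the LLM for a step-by-step plan based on the task and existing code."""
    code = "\n\n".join(Path(p).read_text() for p in code_paths)
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": (
                "Here is existing code and a task description. "
                "Produce a step-by-step implementation plan.\n\n"
                f"TASK:\n{task_description}\n\nCODE:\n{code}"
            ),
        }],
    )
    return response.choices[0].message.content


def check_plan_completion(plan: str, code_paths: list[str]) -> str:
    """At the end, ask which steps of the verified plan are still missing."""
    code = "\n\n".join(Path(p).read_text() for p in code_paths)
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": (
                "Compare the plan below with the code that was written and "
                "list any steps that are missing or incomplete.\n\n"
                f"PLAN:\n{plan}\n\nCODE:\n{code}"
            ),
        }],
    )
    return response.choices[0].message.content
```

The engineer stays in the loop at both ends: correcting the drafted plan before implementation starts, and reviewing the completion check once the code is written.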

Using auxiliary systems without being an expert

When working on an AI solution, there are plenty of technologies our engineers need to engage with. From API calls to third-party systems to CSS on the interface side, the breadth of their tasks proves that, in a complex world with a plethora of systems, it's hard to be an expert in everything. Here, LLMs lend a helping hand.

Prompting for code that uses an unfamiliar technology with OpenAI o3, with its chain of thought and its ability to search the web for an answer, makes it remarkably easy to put together a code snippet without a painstaking review of the documentation. The model can seek out new features and functions and offer code that leverages them in no time. At that point, knowing the basics of the technology is enough to verify the results, so one doesn't need to be an expert in the auxiliary tech to use it effectively in the project. It is enough to "follow the thread" to its end and get to the clues faster than one would manually.
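
For illustration, a minimal sketch of this kind of documentation lookup through the OpenAI Responses API follows; the model name, the availability of the built-in web-search tool for that model, and the prompt wording are all assumptions made for the example.

```python
# Minimal sketch of drafting a snippet for an unfamiliar library, assuming the
# OpenAI Responses API; the model name and the hosted web-search tool's
# availability for that model are assumptions.
from openai import OpenAI

client = OpenAI()


def draft_snippet_with_docs_lookup(question: str) -> str:
    """Let the model search the web for current docs before drafting a snippet."""
    response = client.responses.create(
        model="o3",                              # assumption
        tools=[{"type": "web_search_preview"}],  # assumption: hosted web-search tool
        input=(
            "Search the current documentation if needed, then write a short, "
            "working code snippet for the following task and mention what you "
            "relied on:\n" + question
        ),
    )
    return response.output_text
```

The result is a starting point, not a final answer: knowing the basics of the auxiliary tech is still needed to verify what the model produced.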

Knowing what to ask for has always been the key 

The change that these tools and techniques bring is similar to the arrival of Internet access. While some early users got stuck on AOL pages, others googled up solutions in the wilderness of the net because they knew how. The AOL pages were a "walled garden": a harbour offering content, chat rooms, and email in a confined, familiar, yet limited space.

In the time of Large Language Models, just "prompting code" is similar to that AOL content: seemingly powerful on the surface but insulated, often limited in authoring and expression, still unable to challenge real human intellect. Powerful coding assistance tools and techniques, however, can genuinely boost the human intellect if one knows how to leverage them, just as savvy early Internet users knew how to google answers up.
