The software development landscape is changing rapidly, driven by the proliferation of artificial intelligence tools. These AI coding tools fall into two primary categories: generators, which aim to produce entire codebases from natural-language prompts, and assistants, which integrate directly into the developer's workflow. The architectural and philosophical differences between the two approaches are reshaping how developers work.
Ivan Liagushkin, a software developer with more than 10 years of experience building large-scale web applications, offers insight into this evolving field. He leads engineering at Twain, an AI copywriting startup backed by Sequoia Capital.
"Tools like v0.dev and GitHub Copilot may seem similar, but they are fundamentally different philosophically," Liagushkin said. "Generators primarily compete with no-code and low-code platforms, targeting non-developer professionals. Coding assistants, in contrast, aim to transform everyday coding workflows."
Generators like v0.dev from Vercel and bolt.new from StackBlitz are designed for rapid prototyping and launching minimum viable products (MVPs). They are often opinionated about the technologies they use, promoting specific tools and platforms.
"These generators are highly opinionated about the technologies they use and often promote specific tools for users to subscribe to," Liagushkin said. "For instance, both bolt.new and Lovable promote the Supabase development platform, while v0.dev naturally promotes Vercel hosting."
Coding assistants, on the other hand, focus on seamless integration into existing workflows, understanding codebases, and providing universal tooling across technologies. They are designed to be helpful for both individual developers and teams.
"Coding assistants aim to transform everyday coding," Liagushkin said. "It's vital for them to make sense for single developers and teams in particular. Cursor Editor looks especially promising, providing a convenient way to share and scale LLM instructions with so-called 'cursor rules.'"
The underlying architecture of these tools is similar; the main differences lie in the user interface and in how each tool augments the model's context. The core component is the large language model (LLM).
"The key component is the LLM itself," Liagushkin said. "All generators mentioned rely on Anthropic's Claude 3.5 Sonnet, the state-of-the-art coding model for a long time, surpassed only by its successor Claude 3.7 Sonnet. Coding assistants, however, allow switching between models."
These tools typically do not fine-tune the underlying models; instead, they rely on advanced prompting techniques. Open-source tools like bolt.new offer a window into that architecture.
"Thanks to bolt.new being open-source, we can examine what's used," Liagushkin said. "The core system prompt explains to the LLM its execution environment and available actions: creating and editing files, running shell commands, searching codebases, and using external tools. Prompts are well-structured with XML-style formatting and use one-shot learning to reduce hallucinations and inconsistencies."
Managing context, especially for large codebases, is a significant challenge. Assistants index codebases and use vector databases for full-text search.
"The biggest challenge is providing LLMs with proper context," Liagushkin said. "It's essential to feed the right parts of the right files along with corresponding modules, documentation, and requirements. Assistants index the codebase, creating tree-shaped data structures to monitor file changes, then chunk and embed files in vector databases for full-text search."
Despite their power, AI coding tools have limitations. The "70% problem," articulated by Addy Osmani, describes their struggle with the final 30% of the work: the part that makes software robust and maintainable.
"The '70% problem' perfectly describes AI coding tools' fundamental limitation: they can quickly generate code that gets you 70% of the way there but struggle with the crucial final 30% that makes software production-ready, maintainable, and robust," Liagushkin said.
Addressing these limitations involves improving model accuracy, advancing agentic architectures, and enhancing prompting techniques.
"This problem will be solved in three different ways," Liagushkin said. "First, models will become more accurate. Secondly, coding assistants' architecture will advance through agentic approaches. Lastly, we will change. Everyone will learn effective prompting techniques."
At Twain, Liagushkin has experienced similar limitations in developing AI copywriters. Strategies to mitigate these include LLM request caching, model juggling, and prompt preprocessing.
"The only difference between coding assistants and Twain is that coding assistants produce code, while Twain produces personalized messages of human-written quality," Liagushkin said. "The challenges remain the same though - to be valuable, we must generate copies fast, cost-effective, and keep them free of hallucinations."
Looking ahead, Liagushkin anticipates significant advancements in model quality and workflow evolution. However, he emphasizes that technology adoption remains a critical factor.
"The progress in AI model quality is astonishing, and we should expect models to become even more accurate, stable, and cost-effective," Liagushkin said. "However, I believe that truly transformative changes in coding processes will come not primarily from engineering and AI breakthroughs but from workflow and mindset evolution."
Ethical considerations, particularly data security, are also paramount. Liagushkin suggests deploying coding LLMs within local networks and using visibility restriction tools.
"Ethical considerations primarily concern data security -- a significant but technically solvable problem," Liagushkin said. "Coding LLMs can be deployed within organizations' local networks, with visibility restriction tools designed to isolate sensitive code sections."