What is AI engineering?
By Pat Nadolny (@pat_nadolny)
The AI Engineer title is still fairly ambiguous and is often conflated with several other similar, but distinct, titles in the space. In this post, I'll describe how I think about the role of an AI engineer, answering the question: what is AI engineering?
Short Answer
AI engineering is a modern subset of software engineering that leverages LLMs behind the scenes. It requires a new toolkit of frameworks and patterns but still heavily relies on the foundations of traditional software engineering.
Long Answer
What It's Not
This is where I see a lot of variability in job postings and Reddit threads. There's no single definition yet, but it's helpful to first describe what it's not:
- Not ML research — you don’t need a PhD or to train models from scratch.
- Not data science — the focus isn’t analytics or dashboards.
- Not just prompt engineering — prompts are part of the toolkit, not the whole job.
- Not MLOps — you’re not running training pipelines or infra.
- Not plain backend — it builds on backend skills but adds new tools like retrieval and evaluations.
Some of these smell like AI engineering, and there are definitely overlapping skills, but they're distinct roles.
Mostly Backend Engineering
The majority of the skills needed for AI engineering are just traditional backend engineering skills. At the end of the day we're still building software using the same principles we always have. I like to think about this new evolution the same way as leveraging an external API or microservice within an app, say payment processing. Just because we decided to outsource payment processing to Stripe doesn't mean the job is done. We still have to build the application and the glue code, but we can outsource the business logic complexity to the external service.
Leveraging AI is the same in a lot of ways: you outsource parts of your application logic to AI (usually via API calls) to increase power and flexibility while also reducing development and maintenance time.
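To make the analogy concrete, here's a minimal sketch of that outsourcing: a sentiment check that once required hand-written rules becomes a single model call. The `call_llm` parameter and the `fake_llm` stand-in are assumptions for illustration; in practice you'd pass a thin wrapper around your provider's API client.

```python
# Before: hand-written, rigid keyword rules.
def sentiment_rules(text: str) -> str:
    negative_words = {"terrible", "broken", "refund"}
    return "negative" if any(w in text.lower() for w in negative_words) else "positive"

# After: the same decision outsourced to a model behind an API.
# `call_llm` is a placeholder for your provider's client call.
def sentiment_llm(text: str, call_llm) -> str:
    prompt = f"Classify the sentiment of this review as 'positive' or 'negative':\n{text}"
    return call_llm(prompt).strip().lower()

fake_llm = lambda prompt: "negative"  # canned response for the sketch
print(sentiment_rules("This product is terrible"))            # negative
print(sentiment_llm("This product is terrible", fake_llm))    # negative
```

The rules version breaks on anything outside its keyword list ("it stopped working" slips through); the model version handles phrasing it has never seen, at the cost of a network call and non-deterministic output.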
The Differences
With that said, there are some big differences in how software is designed when leveraging AI.
The two main features that we gained are:
- Reasoning
- Natural language business logic
Reasoning
AI engineers balance the tradeoffs between features that are better suited for traditional deterministic logic and those suited for non-deterministic AI evaluation.
I like to generalize the differences into two buckets:
- Things we could do before but are much easier now
- Things we could NOT do before that AI allows us to do now
There's overlap because some things were possible in the past but the level of difficulty made them not worth the effort. A few examples illustrate this. For the first bucket, we could always parse PDFs to get structured data, but it was difficult and very rigid. Now LLMs do this with ease given just a simple prompt as input. An example of the second is outsourcing decision making to an AI model. Previously we had to write every branch of logic that the code would follow; if we didn't write it, the program didn't know how to handle the situation. Now we can give instructions and input data to the AI model and let it decide what to do.
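As a sketch of the first bucket, structured extraction reduces to prompt construction plus JSON parsing. The `call_llm` callable and the invoice field names here are hypothetical, chosen for illustration; a real system would feed in the extracted PDF text and a real provider client.

```python
import json

def build_extraction_prompt(document_text: str) -> str:
    """Build a prompt asking the model to return structured data as JSON."""
    return (
        "Extract the invoice number, total amount, and due date from the "
        "document below. Respond with only a JSON object using the keys "
        '"invoice_number", "total", and "due_date".\n\n'
        f"Document:\n{document_text}"
    )

def extract_invoice_fields(document_text: str, call_llm) -> dict:
    """Send the prompt to a model and parse its JSON reply.

    `call_llm` is any callable that takes a prompt string and returns the
    model's text response (e.g. a wrapper around your provider's API).
    """
    raw = call_llm(build_extraction_prompt(document_text))
    return json.loads(raw)

# A canned response stands in for a real model call in this sketch.
fake_llm = lambda prompt: '{"invoice_number": "INV-42", "total": 199.0, "due_date": "2025-01-31"}'
fields = extract_invoice_fields("Invoice INV-42 ... total $199.00, due Jan 31 2025", fake_llm)
print(fields["invoice_number"])  # INV-42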
We've always been able to build software products that allow the user to define their own custom logic, a true platform, but it was very difficult. We either had to spend a ton of time building web apps that could do everything (the AWS console comes to mind) with a million little drop-downs and buttons, or allow users to insert code snippets that the platform runs. All the UI elements were overwhelming to users, and adding code snippets to a web app was always a bad experience. It's a lot of work to give the user the power to build without overwhelming them with too many knobs. In modern AI software engineering we can expose prompts to solve this problem: let any user, developer or not, define logic in natural language.
Natural Language Business Logic
The biggest shift with AI isn't that we suddenly write software differently — it's that business logic becomes highly flexible and defined in natural language; it's stored in prompts and context instead of code. That makes software far more adaptable and reactive.
Why Flexibility Matters
Building and shipping production software takes time. Making changes takes even longer — code reviews, deployments, migrations, tests. Engineering orgs spend enormous energy trying to shorten this cycle (CI/CD pipelines, isolated testing, AI-assisted IDEs, code review bots, etc.).
One way to accelerate development is to avoid code changes altogether by moving logic into data. This is the foundation of flexible software.
Logic as Data
Traditionally we used small bits of data, or user configurations, to alter application experiences. The user selects the dark mode setting in their profile, and we have code that adjusts the experience to match. Or, going further, if we're building a platform we'll expose entrypoints for users to define their own code snippets that we run for them. The outcome in both cases is that the user gets to customize their own experience with varying levels of flexibility. Usually more flexibility means more engineering effort to build it.
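A minimal sketch of the traditional version, using a hypothetical theme setting: the logic lives in code, but which branch runs is decided by data.

```python
# User preferences stored as data (in practice, a database row).
# Changing the experience is a data update, not a code change.
user_settings = {"theme": "dark"}

def render_page(settings: dict) -> str:
    """Pick the page background based on the stored theme setting."""
    background = "#111" if settings.get("theme") == "dark" else "#fff"
    return f"<body style='background:{background}'>...</body>"

print(render_page(user_settings))       # dark background
print(render_page({"theme": "light"}))  # light background
```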
With AI systems we're able to leverage this same approach, but in a much more powerful, easier to implement, and user-friendly way.
The AI Shift
Now, with LLMs, we have a new layer of flexibility. Prompts and context can serve as variable business logic:
- Prompts = the rules of operation (logic).
- Context = the inputs/state that logic operates on.
Instead of coding every condition, we can offload parts of the logic to a model. The application code becomes leaner — just the glue that connects database state and prompts/context to the LLM.
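A sketch of that split, with a hypothetical triage rule: the prompt (logic) lives in a data store, the ticket text (context) arrives at runtime, and the application code is just the glue that joins them before the model call.

```python
# Business rules stored as data (here a dict; in practice a database row).
# Editing the rule is a data update; no deployment needed.
stored_rules = {
    "support_triage": (
        "You are a support triage assistant. Classify the ticket as "
        "'billing', 'bug', or 'other'."
    )
}

def build_request(rule_name: str, context: dict) -> str:
    """Glue code: join the stored prompt (logic) with runtime context (state)."""
    return f"{stored_rules[rule_name]}\n\nTicket:\n{context['ticket_text']}"

request = build_request("support_triage", {"ticket_text": "I was double charged."})
print(request)
```

Tightening the classification rules, or adding a new category, means editing the `support_triage` row — the glue code never changes.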
Here's how software flexibility has evolved:
| Traditional Software | AI-Enhanced Software |
|---|---|
| Rules live in code | Rules live in prompts |
| Changes need deployments | Changes are data updates |
| Fixed logic paths | Dynamic, context-aware responses |
| Scale by writing more code | Scale by improving prompts & context |
Outcome
This is what makes AI engineering different. By treating prompts and context as first-class application state, engineers can build lean, adaptable systems that evolve faster than traditional code-heavy software.
New Tools, New Challenges
As more of the software is driven by prompts, context, and calls to LLMs, we need new tools to solve the new challenges. AI engineers need to get up to speed on these new tools and techniques.
- Tool calling — extend LLMs with custom functions.
- Prompt engineering — use effective techniques to get reliable results.
- Model selection — balance cost, performance, speed, and quality. Every model has different characteristics.
- Context search (RAG) — retrieve knowledge via embeddings and vector databases.
- Evals — test and monitor non-deterministic logic.
- Agent frameworks — abstract and orchestrate the complexity above.
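As a toy sketch of the retrieval step in RAG, with hand-written three-dimensional vectors standing in for real embeddings: rank stored documents by cosine similarity to the query vector. A production system would use an embedding model and a vector database instead, but the ranking idea is the same.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "embeddings"; real systems get these from an embedding model.
documents = {
    "refund policy": [0.9, 0.1, 0.0],
    "api rate limits": [0.0, 0.2, 0.9],
    "billing faq": [0.8, 0.3, 0.1],
}

def retrieve(query_vec, k=2):
    """Return the k stored documents most similar to the query embedding."""
    ranked = sorted(documents, key=lambda d: cosine(query_vec, documents[d]), reverse=True)
    return ranked[:k]

print(retrieve([0.85, 0.2, 0.05]))  # billing/refund docs rank highest
```

The retrieved documents are then pasted into the prompt as context, which is what puts the "retrieval" in retrieval-augmented generation.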
Why It Matters
If you’re a software engineer today, learning these AI engineering skills is how you stay ahead.
You don’t need a PhD or deep ML research background — you already have the core engineering skills. The difference is picking up the new tools (prompts, context, retrieval, evaluations) and learning how to use them to build flexible, production-ready systems.
The engineers who adapt will be the ones shaping the next generation of software.
Summary
AI engineers are software engineers who combine core backend skills with new tools for working with LLMs. They don’t train models from scratch—they build production-ready systems that use prompts, context, and APIs to provide highly customizable products and outsource decision making.
The result is leaner, more flexible software that adapts faster and delivers more power with less code.